Category Archives: Tech

Monte Carlo Method in R (with worked examples)

The Monte Carlo method is a handy tool for transforming problems of a probabilistic nature into deterministic computations using the law of large numbers. Imagine that you want to assess the future value of your investments and see what the worst-case scenario is for a given level of probability. Or that you want to plan the production of your factory, given the past daily performance of individual workers, to ensure that you will meet a tough delivery plan with high enough probability. For these and many more real-life tasks you can use the Monte Carlo method.

[Figure: Monte Carlo approximation of Pi]
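
To give a flavour of the approach, here is a minimal sketch of the Pi approximation above (the post itself works the examples in R; this Python version follows the same logic): sample points uniformly in the unit square and count the fraction that lands inside the quarter circle.

import random

def approximate_pi(n_samples=1000000):
    # count random points from the unit square that fall
    # inside the quarter circle of radius 1
    inside = sum(
        1 for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    # by the law of large numbers, inside / n_samples converges
    # to the area of the quarter circle, pi / 4
    return 4 * inside / n_samples

print(approximate_pi())   # approaches 3.14159... as n_samples grows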

Continue reading

Intuition vs Unsupervised Learning – Agglomerative Clustering in practice

Clustering is a hugely important step of exploratory data analysis and finds plenty of great applications. Typically, a clustering technique identifies different groups of observations in your data. For example, if you need to perform market segmentation, cluster analysis will help you label each segment so that you can evaluate its potential and target the attractive ones. Your marketing program and positioning strategy therefore rely heavily on a very fundamental step – the grouping of your observations into meaningful segments. There are many more use cases in computer science, biology, medicine and social science. However, it often turns out to be quite difficult to define properly what a well-separated cluster looks like.

Today, I will discuss some technical aspects of hierarchical cluster analysis, namely Agglomerative Clustering. One great advantage of this hierarchical approach is the fully automatic selection of the appropriate number of clusters – in a genuine unsupervised learning problem, we have no idea how many clusters to look for! In my view, this clever clustering technique also resolves some of the ambiguity around the vague definition of a cluster, and is thus well suited for automatic detection of such structures. Moreover, the agglomerative clustering process employs standard metrics to describe clustering quality, so it is fairly easy to observe what is going on.
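
As a minimal sketch of the idea (scikit-learn and synthetic data are assumed here; the full post may use different tooling), a distance threshold can replace a fixed cluster count, letting the algorithm decide how many clusters the data contains:

from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering

# synthetic stand-in data: 300 points around 4 centres
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# with n_clusters=None, merging stops once the linkage distance
# exceeds the threshold, so the cluster count is not fixed upfront
model = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=40.0,   # an assumed value; tune per dataset
    linkage="ward",
)
labels = model.fit_predict(X)
print(model.n_clusters_, "clusters found")

Continue reading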

Understanding the behavior of regression performance metrics

When building predictive models, you obviously need to pay close attention to their performance. That is essentially what it is all about – getting the prediction right. Especially if you are working for paying clients, you need to prove that the performance of your models is good enough for their business. Fortunately, there is a whole bunch of statistical metrics and tools at hand for assessing a model’s performance.

In my experience, performance metrics for (especially binary) classification tasks, such as the confusion matrix and the metrics derived from it, are naturally understood by almost anyone. The situation is a bit more problematic for regression and time series. For example, when you want to predict future sales or derive income from other parameters, you need to show how close your prediction is to the observed reality.

I will not write about (adjusted) R-squared, the F-test and other statistical measures. Instead, I want to focus on performance metrics that represent a more intuitive concept of performance, as I believe they can help you sell your work much better. These are (a short sketch computing all five follows the list):

  • mean absolute error
  • median absolute deviation
  • root mean squared error
  • mean absolute percentage error
  • mean percentage error
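
As a rough sketch (NumPy is assumed; the exact definitions are spelled out in the full post), all five metrics reduce to a few lines of code:

import numpy as np

def regression_metrics(y_true, y_pred):
    err = y_true - y_pred
    return {
        # average size of the errors, in the units of the target
        "MAE": np.mean(np.abs(err)),
        # median absolute deviation: robust to a few huge errors
        "MAD": np.median(np.abs(err)),
        # penalizes large errors more heavily than MAE
        "RMSE": np.sqrt(np.mean(err ** 2)),
        # relative error in percent; assumes y_true has no zeros
        "MAPE": 100 * np.mean(np.abs(err / y_true)),
        # signed relative error, reveals systematic over/under-prediction
        "MPE": 100 * np.mean(err / y_true),
    }

y_true = np.array([100.0, 150.0, 200.0])
y_pred = np.array([110.0, 140.0, 180.0])
print(regression_metrics(y_true, y_pred))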

Continue reading

Optimizing Python code performance with cProfile

There are many situations where our code runs too slow and the reason is not apparent. For such situations, the Python cProfile module comes in very handy. The module lets us see how much time individual steps in our code take, as well as how many times certain functions are called. In the following paragraphs, I will explore its capabilities a bit more.

First, however, let’s remember the quote by Donald Knuth: “premature optimization is the root of all evil (or at least most of it) in programming”. So make sure that you don’t start optimizing before you even have working code! In many cases you will not be able to determine the bottlenecks beforehand and might spend a lot of extra effort in the wrong places.

Profiling with cProfile

The easiest way of using the cProfile module from within a Python script looks as follows:

import cProfile

def call_function():
    # placeholder for the code you actually want to profile
    sum(i * i for i in range(100000))

pr = cProfile.Profile()
pr.enable()                    # start collecting profiling data

call_function()                # the code under investigation

pr.disable()                   # stop collecting
pr.print_stats(sort='time')    # report, slowest functions first

In the code, we create a Profile object and enable it, execute the code of interest, disable profiling and print the collected statistics, sorted by the time spent inside each function.
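
If you prefer not to touch the code at all, the module can also profile a whole script from the command line (script.py being a placeholder name):

python -m cProfile -s time script.py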

Continue reading

8 simple ways to boost your coding skills (not just) in R

Our world is generating more and more data, which people and businesses want to turn into something useful. This naturally attracts many data scientists – sometimes called data analysts, data miners or many other fancier names – who aim to help with this extraction of information from data.

A lot of data scientists around me graduated in statistics, mathematics, physics or biology. During their studies, they focused on individual modelling techniques or on nice visualizations for the papers they wrote. Nobody had taken a proper computer science course that would help them tame the programming language completely and produce nice, professional code that is easy to read, can be re-used, runs fast with reasonable memory requirements, is easy to collaborate on and, most importantly, gives reliable results.

I am no exception. During my studies we used R and Matlab to get hands-on experience with various machine learning techniques. We obviously focused on choosing the best model, tuning its parameters, dealing with violated model assumptions and other rather theoretical concepts. So when I started my professional career, I had to learn how to deal with imperfect input data, how to create a script that can run daily, how to fit the best model and store the predictions in a database, or even serve them directly at some online, client-facing endpoint.

To do this I took the standard path: reading books, papers and blogs, trying new stuff on hobby projects, googling, stack-overflowing and asking colleagues. But again, I was mainly focusing on overcoming small ad hoc problems.

Luckily for me, I’ve met a few smart computer scientists along the way who showed me how to develop code that is more professional – or at least less amateurish. What follows is a list of the most important points I had to learn since leaving university. These points allowed me to work on problems that are more complex both theoretically and technically. I must admit that improving your coding skills is a never-ending story that restarts with every new project.

Continue reading

Analytical Market Segmentation with t-SNE and Clustering Pipeline

Irrespective of whether the underlying data comes from e-shop customers, your clients, small businesses, or large for-profit and non-profit organizations, market segmentation analysis always brings valuable insights and helps you leverage otherwise hidden information in your favor – for example, greater sales. It is therefore vitally important to use an efficient analytical pipeline, which will not only help you understand your customer base, but also serve you when planning your tailored offers, advertising, promos or strategy. Let us play with some advanced analytics to provide a simple example of the efficiency gains from segmentation techniques, namely clustering, projection pursuit and t-SNE.

As your goal might be improving your sales through tailored customer contact, you need to discover homogeneous groups of people. Different groups of customers behave and respond differently, so it is only natural to treat them in different ways. The idea is to earn greater profit in each segment separately, through a strategy adapted to it. Thus, we need to accomplish two fundamental tasks:

  1. identify homogeneous market segments (i.e. which people are in which group)
  2. identify important features (i.e. what is decisive for customer behavior)

In this post, I am focusing on the first problem from the technical point of view, using some advanced analytic methods. For the sake of a brief demonstration, I will work with a simple dataset describing the annual spending of clients of a wholesale distributor on diverse product categories. As the figure below suggests, it would be difficult to detect well-separated clusters of clients at first sight.

[Figure: scatter_all – pairwise scatter plots of the annual spending data]
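
To make the pipeline concrete, here is a minimal sketch of the t-SNE-plus-clustering part (scikit-learn and a synthetic stand-in for the wholesale data are assumed; projection pursuit is left out):

from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# synthetic stand-in: 440 clients, 6 spending categories
X, _ = make_blobs(n_samples=440, centers=3, n_features=6, random_state=1)

# scale first, then project the spending categories down to 2-D
X_scaled = StandardScaler().fit_transform(X)
embedding = TSNE(n_components=2, perplexity=30, random_state=1).fit_transform(X_scaled)

# cluster in the low-dimensional embedding, where the groups
# tend to be better separated than in the raw feature space
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(embedding)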

Continue reading

Advanced analytics with Python and Tableau 10.1 integration

After introducing R capabilities in version 8.1, Tableau 10.1 now also comes with support for Python. This is great news, especially for data scientists who use reports to visualize the results of more sophisticated analytical processes. Such reports can now bring the analytics much closer to the end users, while preserving the expected level of user-friendliness.

[Figure: tableau_python – Tableau and Python integration]

In this post I am using a simple modelling example to describe how exactly the integration of Tableau and Python works.
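
For a flavour of what this looks like in practice: Tableau talks to Python through the external TabPy service, and a calculated field passes aggregated data to Python via one of the SCRIPT_* functions. A minimal sketch (the [Sales] field is hypothetical):

// Tableau calculated field, evaluated by a running TabPy server
SCRIPT_REAL(
    "import numpy as np
return np.log(_arg1).tolist()",
    SUM([Sales])
)

Tableau sends SUM([Sales]) to Python as the list _arg1 and expects a list of the same length back.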

Continue reading