8 simple ways to boost your coding skills (not just) in R

Our world is generating more and more data, which people and businesses want to turn into something useful. This naturally attracts many data scientists – also called data analysts, data miners, and many other fancier names – who aim to help with this extraction of information from data.

A lot of data scientists around me graduated in statistics, mathematics, physics or biology. During their studies they focused on individual modelling techniques or on nice visualizations for the papers they wrote. Nobody had taken a proper computer science course that would help them tame a programming language completely and produce nice, professional code that is easy to read, can be re-used, runs fast with reasonable memory requirements, is easy to collaborate on and, most importantly, gives reliable results.

I am no exception to this. During my studies we used R and Matlab to get hands-on experience with various machine learning techniques. We obviously focused on choosing the best model, tuning its parameters, dealing with violated model assumptions and other rather theoretical concepts. So when I started my professional career, I had to learn how to deal with imperfect input data, how to create a script that can run daily, and how to fit the best model and store its predictions in a database – or even serve them directly at some online, client-facing touch point.

To do this I took the standard path: reading books, papers and blogs, trying new things on hobby projects, googling, stack-overflowing and asking colleagues – but again mainly focusing on overcoming small ad hoc problems.

Luckily for me, I met a few smart computer scientists along the way who showed me how to develop code that is more professional – or at least less amateurish. What follows is a list of the most important points I have had to learn since leaving university. These points have allowed me to work on problems that are more complex both theoretically and technically. I must admit that improving your coding skills is a never-ending story that restarts with every new project.

Continue reading


Analytical Market Segmentation with t-SNE and Clustering Pipeline

Irrespective of whether the underlying data comes from e-shop customers, your clients, small businesses or large profit and non-profit organizations, market segmentation analysis always brings valuable insights and helps you leverage otherwise hidden information in your favor, for example for greater sales. It is therefore vitally important to use an efficient analytical pipeline that not only helps you understand your customer base, but also serves you later when planning your tailored offers, advertising, promos or strategy. Let us play with some advanced analytics to provide a simple example of the efficiency improvement gained by using segmentation techniques, namely clustering, projection pursuit and t-SNE.

If your goal is to improve sales through tailored customer contact, you need to discover homogeneous groups of people. Different groups of customers behave and respond differently, so it is only natural to treat them differently. The idea is to gain greater profit in each segment separately, through a strategy tailored to it. Thus, we need to accomplish two fundamental tasks:

  1. identify homogeneous market segments (i.e. which people are in which group)
  2. identify important features (i.e. what is decisive for customer behavior)

In this post, I focus on the first problem from the technical point of view, using some advanced analytical methods. For the sake of a brief demonstration, I will work with a simple dataset describing the annual spending of clients of a wholesale distributor on diverse product categories. As the figure below shows, it would be difficult to detect well-separated clusters of clients at first sight.

[Figure: scatter_all – scatter plots of the clients' annual spending across product categories]
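To make the pipeline concrete, here is a minimal sketch of the approach described above. The file and column handling are hypothetical, and the post's own pipeline may differ in details (for instance, in whether clustering is done in the original or the embedded space):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# annual spending per client: one row per client, one column per product category
spending = pd.read_csv("wholesale_customers.csv")   # hypothetical file name

# put all categories on a common scale before any distance-based method
X = StandardScaler().fit_transform(spending)

# project clients into 2D with t-SNE to reveal structure hidden in the raw scatter plots
embedding = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)

# cluster the embedded points to obtain candidate market segments
segments = KMeans(n_clusters=5, random_state=42).fit_predict(embedding)

# profile each segment by its average spending per category
spending["segment"] = segments
print(spending.groupby("segment").mean())
```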

Continue reading

Propensity modelling and how it is relevant for modern marketing

In the last few years, the obvious fact that successful marketing means you "contact the right customers with the right offer through the right channel at the right time" has become something of a mantra. While there is nothing to disagree with here, it is a pity that for the most part the saying stays in words and only rarely gets realized. The issue is that while many can repeat the mantra, only a few actually know what is needed to put it into practice. In this post, I am going to talk about the first part: how do you target the right customers for your marketing actions?

There are many approaches to solving this great puzzle. One extreme is having a team of marketing experts who rely solely on their gut feeling, projecting their own opinions onto customers without any proof and without even evaluating or testing the campaigns – because that's what they did in their previous job. It might sound ridiculous in today's digital era, but surprisingly it is often the case.

[Figure: right_customers]

The other extreme is building complex AI engines and letting them make all the decisions. This is typically the proposition of some geeky start-up run by fresh PhD holders. In my opinion, this approach is also wrong. You have absolutely no assurance that the available data truly reflects reality, that the algorithm works flawlessly, or simply that the randomness in the world is not too strong to predict. After all, even companies running algorithmic trading have human dealers overseeing their algorithms, focusing on addressing the algorithms' weaknesses and generally on preventing internal disasters.

As always, I think the solution lies somewhere in between. An experienced marketer whose opinion is backed by information extracted from the available data can truly hit the mark. Imagine that you have to run a campaign to increase sales of a savings account (or a road bike, a new robot, a holiday in the Caribbean…). The long-proven data extraction technique one should consider here is called propensity to buy (or to purchase, or to use).
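To illustrate what such a propensity-to-buy score looks like in practice, here is a minimal sketch using logistic regression; all file, column and feature names are hypothetical, and the post itself may use a different model:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

customers = pd.read_csv("customers.csv")                 # hypothetical customer snapshot
features = ["age", "balance", "n_logins", "has_card"]    # assumed behavioural features
target = "bought_saving_account"                         # 1 if the customer bought after a past campaign

X_train, X_test, y_train, y_test = train_test_split(
    customers[features], customers[target], test_size=0.3, random_state=1)

# fit the propensity model and check how well it ranks buyers above non-buyers
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# score every customer and hand the top of the ranked list to the marketer
customers["propensity"] = model.predict_proba(customers[features])[:, 1]
print(customers.sort_values("propensity", ascending=False).head(10))
```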

Continue reading

Advanced analytics with Python and Tableau 10.1 integration

After R capabilities were introduced in Tableau 8.1, the new Tableau 10.1 now also comes with support for Python. This is great news especially for data scientists who use reports to visualize the results of more sophisticated analytical processes. Such reports can now bring the analytics much closer to the end users while preserving the usual level of user-friendliness.

[Figure: tableau_python]

In this post I use a simple modelling example to describe exactly how the integration of Tableau and Python works.
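As a rough illustration of the mechanics (not necessarily the exact example from the post), the sketch below deploys a toy Python function to a locally running TabPy server so that a Tableau calculated field can call it; the function, its name and the field names are assumptions:

```python
import tabpy_client

# assumes a TabPy server is running locally on its default port
client = tabpy_client.Client("http://localhost:9004/")

def predict_spend(age, income):
    """Toy scoring function; in practice this would wrap a trained model."""
    return [0.5 * a + 0.1 * i for a, i in zip(age, income)]

# publish the function so Tableau worksheets can call it by name
client.deploy("predict_spend", predict_spend,
              "Toy prediction used from a Tableau calculated field", override=True)

# Inside Tableau, a calculated field can then query the deployed endpoint, e.g.:
#   SCRIPT_REAL("return tabpy.query('predict_spend', _arg1, _arg2)['response']",
#               SUM([Age]), SUM([Income]))
```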

Continue reading

How to plot your own bike/jogging route using Python and Google Maps API

Apart from being a data scientist, I also spend a lot of time on my bike. It is therefore no surprise that I am a huge fan of all kinds of wearable devices. A lot of the time, though, I get quite frustrated with the data processing and visualization software that the major providers of wearable devices offer. That's why I have been trying to take things into my own hands. Recently I started playing around with plotting my bike route from Python using the Google Maps API. My novice's guide to all this follows in the post.

[Figure: strava_map]
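For a flavour of what such a plot can look like in code, here is a minimal sketch using the gmplot package, which writes an HTML map backed by the Google Maps JavaScript API; the coordinates are made-up sample points, and the post's own approach may differ:

```python
import gmplot

# route coordinates (hypothetical sample points; in practice parsed from a GPX export)
latitudes = [50.0755, 50.0801, 50.0853]
longitudes = [14.4378, 14.4420, 14.4491]

# center the map on the first point of the ride
# (depending on your gmplot version and Google Maps setup, an API key may be needed)
gmap = gmplot.GoogleMapPlotter(latitudes[0], longitudes[0], zoom=14)

# draw the route as a connected line and save a standalone HTML map
gmap.plot(latitudes, longitudes, color="cornflowerblue", edge_width=5)
gmap.draw("bike_route.html")
```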

Continue reading

Essentials of working with Python in the cloud (Ubuntu)

In most data science applications, it comes in very handy to be able to run code in the cloud. Be it a simple demonstration of functionality that we want to make accessible to a potential client, or an end-to-end implementation of, say, a predictive model, the accessibility of cloud-based solutions is a definite asset. However, running code in the cloud does have its pitfalls, which can discourage many from taking advantage of it.

This is why I have decided to share our experience with working in the cloud. In this post, I will specifically summarize the functionalities that help to run a Python script on an Ubuntu cloud machine.

[Figure: python_on_cloud]

Running a Python script in the cloud can become much more bothersome than development on our local computer, especially if we are using a standard SSH connection. Fortunately, to make our lives easier, there are a couple of functionalities that we can use:

1. argparse (python) – to run the script with various input arguments (see the sketch after this list)
2. tmux (unix) – to run sessions without the need for a permanent SSH connection
3. cron (unix) – to run the scripts with a predefined frequency
4. SimpleHTTPServer (python) – a lightweight web server that gives file access to users who don't have access to our cloud
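As an example of the first item, here is a minimal argparse sketch for a hypothetical daily scoring script (the script and argument names are made up); tmux, cron and SimpleHTTPServer are then used from the shell rather than from Python:

```python
import argparse

parser = argparse.ArgumentParser(description="Daily scoring job running in the cloud")
parser.add_argument("--input", required=True, help="path to the input data file")
parser.add_argument("--date", default=None, help="run date in YYYY-MM-DD, defaults to today")
parser.add_argument("--dry-run", action="store_true", help="run without writing results")
args = parser.parse_args()

# the parsed values drive the rest of the script
print("Scoring", args.input, "for", args.date, "dry run:", args.dry_run)

# typical usage from a tmux session or a cron entry, e.g.:
#   python score.py --input /data/clients.csv --date 2016-11-01
```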

Continue reading

Data Science in Marketing – An Efficient Alternative to Attribution Modelling

Driving a marketing budget sometimes seems to be a mysterious art where decisions are based on the ideas of a few enlightened people who know what's right. But don't fool yourself: the times are changing, and so is the way successful marketing is managed. As in other fields, experienced marketing managers use information hidden in data to help them. With the amount of data and methods available, however, it is often tricky not to get lost and to be able to distinguish the signal from the noise. A typical example is marketing attribution models – a tool that is widely used but, in my experience, rarely maximizes the value leveraged from the data.

Typically, in marketing attribution, marketers want to know which part of a business KPI (typically site visits, sales, new customers, new revenues, etc.) results from which marketing activity. The mainstream approach is to use attribution models that are often very simplistic – like single-source attribution (last click, first click) or fractional attribution (where the contribution is distributed among multiple touch points according to some simple rule). These methods provide marketers with the importance of each marketing channel or campaign with respect to their KPI. Based on this historical information, marketing managers decide how to allocate the marketing budget. This approach, however, puts a great deal of pressure on tedious and demanding data detective work to make sure all client touch points are measured correctly. More importantly, there is no way of knowing that this work has been done correctly, which of course has a significant impact on the credibility of the attribution models.
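To make the single-source rule concrete, here is a toy last-click computation on hypothetical touchpoint data, crediting each conversion entirely to the customer's latest touch:

```python
import pandas as pd

# one row per customer contact (hypothetical touchpoint log of converting customers)
touchpoints = pd.DataFrame({
    "customer": [1, 1, 1, 2, 2],
    "channel":  ["display", "email", "search", "search", "email"],
    "time":     pd.to_datetime(["2016-01-02", "2016-01-05", "2016-01-09",
                                "2016-01-03", "2016-01-08"]),
})

# last click: keep only the latest touch per customer and count conversions per channel
last_touch = touchpoints.sort_values("time").groupby("customer").tail(1)
print(last_touch["channel"].value_counts())
```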

Knowing these difficulties, we decided on an alternative approach. We thought: why should we dig into the individual touch points? Shouldn't we rather focus on the marketing investments and model the ultimate business output? And that is exactly what we did. We took the investments into individual marketing channels over time and used time series analysis to predict our client's business goal (the number of sales). On top of that, we also added seasonality, the marketing investment of competitors and some other simple parameters.
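A rough sketch of this idea follows, using an ordinary regression with monthly seasonal dummies; the file and channel names are hypothetical, and the actual model we built was more involved:

```python
import pandas as pd
import statsmodels.formula.api as smf

# monthly data: sales plus marketing investment per channel
data = pd.read_csv("monthly_marketing.csv", parse_dates=["month"])  # hypothetical file
data["month_of_year"] = data["month"].dt.month                      # simple seasonality proxy

# regress sales on channel investments and seasonal dummies
model = smf.ols(
    "sales ~ tv_spend + online_spend + print_spend + C(month_of_year)",
    data=data).fit()
print(model.summary())

# the fitted model can then be used to simulate how sales respond
# to a planned reallocation of the marketing budget
scenario = data.tail(12).copy()
scenario["online_spend"] *= 1.2        # e.g. shift 20% more budget into online
print(model.predict(scenario).sum())
```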


“Even though we are using data to drive marketing decisions on a daily basis, most of the tools that we have used up until now focus on describing the past. Recently we decided to work together with aLook Analytics to change that. Thanks to their modelling approach to marketing investments we now have accurate information about the expected future developments as well.

Using the interactive Shiny application that is built in Keboola Connection, we want to make informed decisions on the fly, which will help us to reach our sales goals in the most cost efficient way.”

Daniel Gorol, BNP Paribas Personal Finance SA / Cetelem


Continue reading