In our ongoing work with customer data, we carry around a constantly growing mountain of historical data. Because we use machine-learning processes, the amount of data we can learn from grows as well. And because we can, we do: we assume that more information means more knowledge, and doesn't more knowledge lead to better action?
Recently, our algorithms have not been learning so well. The problem is not so much their raw performance as the fact that their speed of adjustment is slowing down. Something must be wrong. On closer inspection, we see that the growing volume of data makes us not only technically but also logically slower: we are becoming more and more conservative. The historical data keeps growing, and it shapes the model's logic in its own way. Our algorithms only register current developments among our customers once those changes persist long enough to outweigh the accumulated past.
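The dilution effect described above can be made concrete with a toy estimator. The sketch below is purely illustrative (the signal, the jump point, and the window size are invented for the example): an estimator averaging over its entire history barely registers a shift in customer behavior, while the same estimator restricted to recent data adapts immediately.

```python
def cumulative_mean(values):
    """Mean over the entire history seen so far.

    Every new observation is diluted by all the observations
    before it, so the estimate reacts ever more slowly.
    """
    total = 0.0
    means = []
    for i, v in enumerate(values, start=1):
        total += v
        means.append(total / i)
    return means


# Hypothetical customer metric: a stable level of 10 that
# permanently jumps to 20 halfway through the observation period.
signal = [10.0] * 100 + [20.0] * 100

all_history = cumulative_mean(signal)          # learns from everything
recent_only = cumulative_mean(signal[-20:])    # same estimator, short memory

# After the shift, the full-history estimate still sits halfway
# between old and new reality, while the short-memory estimate
# has fully caught up.
print(all_history[-1])   # 15.0
print(recent_only[-1])   # 20.0
```

The full-history estimator is not broken; it is doing exactly what it was asked to do. The conservatism is a property of how much past it is forced to carry.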
Do we want that? No. What is the cause? Let's experiment: if we exclude 80% of the historical data from training, the algorithms change dramatically. We realize that we have to decide which data is important and which data our business model actually needs. We can even turn the question around and ask which business model our data supports best. Growth in knowledge, however, only helps up to a point. What cracks the nut in the end is, above all, the right kind of "forgetting".
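Two simple mechanisms for this kind of deliberate forgetting are sketched below, under assumed data shapes (a list of records with a `timestamp` field; the field name, the 20% cut, and the half-life are illustrative choices, not prescriptions from the text): a hard cut that drops the oldest 80% of samples, and a soft variant that keeps everything but down-weights samples exponentially with age.

```python
import math


def forget_oldest(samples, keep_fraction=0.2):
    """Hard forgetting: keep only the most recent fraction of samples.

    With keep_fraction=0.2 this drops the oldest 80% of the history,
    mirroring the experiment described in the text.
    """
    ordered = sorted(samples, key=lambda s: s["timestamp"])
    cutoff = int(len(ordered) * (1.0 - keep_fraction))
    return ordered[cutoff:]


def recency_weights(ages_in_days, half_life_days=180.0):
    """Soft forgetting: exponential decay of sample influence.

    A sample loses half its training weight every half_life_days,
    so old data fades out gradually instead of being cut off.
    """
    return [0.5 ** (age / half_life_days) for age in ages_in_days]


# Hypothetical usage with ten daily records:
samples = [{"timestamp": day, "value": day * 1.5} for day in range(10)]
recent = forget_oldest(samples)           # the 2 newest of 10 records
weights = recency_weights([0, 180, 360])  # today, six months, a year old
```

Which variant fits depends on the business model: the hard cut is easy to reason about, while decay weights preserve rare but still-relevant long-term patterns at reduced influence.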