This is still the churned-user recall case (with the benefit give-away) mentioned above. The recall experiment comparing human agents and AI showed that AI's recall efficiency was higher than that of humans. Seeing this unexpected result, I reminded the responsible operations colleague to listen to the call recordings and find out what was going on.
The colleague did not act immediately, and the data continued to show AI outperforming human agents in the benefit give-away experiment.
This time he followed the advice and listened to the recording, and the case was solved immediately.
It turned out that because recall efficiency was low, the commissions the agents could earn were limited, so the human agents were not very motivated when making the calls, and therefore performed worse than the standardized AI.
So would a strategy of having top-performing agents make the churn-recall calls have fared better?
If we don't understand the causality behind the correlation, we may draw a wrong conclusion.
In fact, this is a common criticism of teams that blindly trust A/B testing: they pay attention only to correlation and never dig into causality.
I once heard a story that Toutiao found through experiments in a certain country that a purple color scheme led to better user retention. But what is the causal mechanism behind that? No one knows.
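The AI-vs-human recall story above is a textbook confounder: what looked like a channel effect was really an incentive effect. A minimal simulation (all numbers invented for illustration) of how a hidden motivation variable can make "AI vs human" look causal:

```python
import random

random.seed(0)

def recall_success(channel, motivation):
    """Success depends only on the hidden motivation, not on the channel itself."""
    return random.random() < 0.30 * motivation  # effort drives conversion

# Confounder: low commissions mean the human agents put in less effort,
# while the AI always executes the script at full "motivation".
trials = 10_000
human = sum(recall_success("human", motivation=0.5) for _ in range(trials))
ai = sum(recall_success("ai", motivation=1.0) for _ in range(trials))

print(human / trials, ai / trials)  # AI "wins", but the channel is not the cause
```

Comparing only the two conversion rates, one would conclude "AI beats humans"; controlling for motivation, the channel has no effect at all.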
Set experimental goals
This is something I have always emphasized, yet many people ignore it. They think an experiment is nothing more than running it and letting the final data speak, so in practice they design the experimental method but never set experimental goals. Setting goals, however, is actually important.
Take Zhang Xiaolong's speech on "WeChat 10th Anniversary" as an example:
In June, when the new version of social recommendation was still under development, I wrote an assertion on the blackboard: one day in the future, within video playback, the consumption ratio of videos from followed accounts, videos recommended by friends, and videos recommended by the machine should be 1:2:10. That is, for every 10 videos a person watches from followed accounts, they should on average watch 20 videos liked by friends and 100 videos recommended by the system.
It was explained at the time:
There are two kinds of content: one is knowledge-type information that you have to engage your brain to understand, which is learning; the other is consumption-type information that requires no thinking and stays within your mental comfort zone, which is entertainment. Friend likes are friends pushing you toward knowledge-type information you may not be interested in, which belongs to the learning category; machine recommendation means the system caters to you and lets you comfortably browse the consumption-type information you like, which belongs to the entertainment category. The Follow feed contains both kinds of information.
Follow: because you already know what you chose to follow, it will not hold many surprises, so it gets a 1. Friend likes are tiring to read, but cannot be missed, so they get a 2. System recommendations, in keeping with the lazy-person principle, are the information most people can consume most easily and comfortably, so they get a 10.
But our current market data does not show this ratio yet: the overall VV (video views) generated by friend likes is currently twice that of machine recommendations.
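To make the gap concrete, we can compare the target mix from the speech (follow : friend likes : machine = 1:2:10) with the observed one, where friend-likes VV is twice machine VV. The figure assumed for follow VV below is purely hypothetical, since the speech gives no number for it:

```python
def normalize(mix):
    """Scale a consumption mix so the 'follow' component equals 1."""
    base = mix["follow"]
    return {k: v / base for k, v in mix.items()}

# Target mix stated in the speech.
target = {"follow": 1, "friend_likes": 2, "machine": 10}

# Observed: friend-likes VV is 2x machine VV. Setting follow VV equal to
# machine VV is an invented assumption, just to anchor the comparison.
observed = normalize({"follow": 1, "friend_likes": 2, "machine": 1})

# How much each component must grow (relative to follow) to hit the target.
gap = {k: target[k] / observed[k] for k in target}
print(gap)
```

Under this (assumed) baseline, machine-recommendation consumption would need to grow roughly tenfold relative to the follow feed to reach the stated 1:2:10 goal.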