Yandex recently released a new tool «Advanced analytics of search queries by URL (β)».
If you dig into this tool and ponder why it is needed, the answer is banal: Yandex sells data to large companies and packages it for them into a convenient bundle of «URL ↔️ query ↔️ date ↔️ region» with the ability to look far into the past.
I realized this only after mulling over the question «why is this tool needed, and how does it differ from position monitoring and query statistics?» for a while. The comments on the announcement in the Yandex.Webmaster blog gave no answer either; here is someone there asking the same question: https://skr.sh/sYJesr8gZcf
But for small and medium-sized businesses, it seems, nobody gives a damn.
Let's briefly break down the usage scenarios Yandex suggests for this tool:
- Gathering relevant semantics for pages or the whole site. I don't get it. If you need the entire semantic core, you can export it from «Query Statistics»; if you need a category or specific pages, you can export it via «Query Monitoring». Yes, those have time limitations. But in «Advanced analytics of search queries by URL (β)», 1 day = 1 limit, and you can't query by URL prefix. So if you have 100 URLs and need to collect semantics for 365 days, that's 36,500 limits. On a free account, that means 365 days of sitting there collecting the whole core.
- Query cannibalization. The suggestion is to submit several pages and see whether their semantics overlap. But how is that different from exporting those same pages from position monitoring and checking for overlapping semantics?
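The limit arithmetic in the first scenario is easy to sanity-check with a tiny sketch. The cost model (1 URL × 1 day = 1 limit) comes from the tool itself; the free-account daily quota of 100 limits is my assumption, inferred from the 365-day figure above:

```python
import math

# Cost model of «Advanced analytics of search queries by URL (β)»:
# pulling data for one URL for one day consumes one limit.

def limits_needed(urls: int, days: int) -> int:
    """Total limits to pull per-day data for every URL over a date range."""
    return urls * days

def days_to_collect(urls: int, days: int, daily_quota: int = 100) -> int:
    """Calendar days needed to spend those limits under a daily quota.

    daily_quota=100 is an assumed free-tier value, not a documented one.
    """
    return math.ceil(limits_needed(urls, days) / daily_quota)

total = limits_needed(100, 365)   # 36,500 limits, as in the example above
wait = days_to_collect(100, 365)  # 365 calendar days on the assumed quota
```

So under these assumptions, a modest 100-URL site needs a full year of daily exports just to rebuild one year of history.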
Looking at it soberly, you can find some usage scenarios, and a couple of times a year a small or medium-sized business may genuinely need it.