How Poor AI Content Moderation Killed Astrotalk’s $100M Funding Deal

The hidden danger inside conversational platforms that India can’t ignore


Breaking News: During a due diligence check, a $100 million investment went up in smoke over the discovery of NSFW content, sinking Astrotalk's transaction with Hornbill Capital. It demonstrates to the whole world that AI content moderation is no longer just a technical issue, but an investment dealbreaker.

The Deal That Fell Apart

If you're Indian, you have most probably heard of Astrotalk; it was a complete winner of a company. In the last few years it had gained enormous traction, with influencers appearing in its advertisements and more. The eight-year-old Indian astrology platform was valued at $300 million in 2023. By fiscal year 2025, revenue had climbed to ₹1,200 crore (roughly $145 million), a figure that shows just how fast the company was growing. There was also talk of profits rising from ₹100 crore to ₹250 crore.

Hornbill Capital, an India-focused hedge fund, was willing to put in $100-120 million at a valuation between $1 billion and $1.2 billion. The talks were deep into the advanced stages. Everything was as good as agreed.

So, What Went Wrong?

During due diligence, NSFW videos and chat conversations were found spread across the platform. If you don't know what exactly NSFW is, I'll tell you: it stands for "Not Safe For Work." A source close to the negotiations said: "Hornbill was quite uncomfortable with the Not Safe for Work content, such as videos and chats, that was found during the diligence stage. The truth is that no investor is willing to take on that kind of risk."

The deal is as dead as a doornail, and not because of bad revenue, poor traction, or competition. It fell apart primarily because of AI content moderation failures.

Astrotalk's co-founder Anmol Jain maintained that no term sheet was ever signed and blamed the situation on a valuation mismatch. Nonetheless, by now everyone knows the damage was already done.

Why AI Content Moderation Scares Investors

For years, social media platforms could afford to take it easy with content moderation, relying on laws that shielded them from legal liability for user-generated content. As long as the content was posted by users, platforms were not liable. That is changing rapidly.

Recent lawsuits signal the stakes. Google's Gemma model, for instance, fabricated false allegations that Senator Marsha Blackburn had sexually assaulted someone. Investors are now asking the obvious question: if a user generates a false accusation on your platform, who is responsible?

Meanwhile, the Indian government is not sitting still either. In October 2025, new amendments were introduced requiring platforms not only to verify but also to supervise AI-generated content. Every new rule adds compliance costs and makes them less predictable, and investors hate both.

Why It's Extremely Difficult to Perform AI Content Moderation at Scale


Conversational platforms have issues that cannot be resolved by mere filtering:

  • Context Understanding Fails: AI is confused by sarcasm, cultural nuance, and ambiguous language. A statement that is perfectly polite in one culture can read as rude in another. India, with 22 official languages and more than 1,600 dialects, compounds the problem exponentially.
  • Bad Actors Adapt Fast: People continually invent new ways to evade automated detection, using coded language, fresh slang, or references that only make sense in certain contexts. There is always a lag between these evasion tactics and the training data.
  • Bias Issues: Moderation algorithms can absorb biases from their training data, flagging content from minority groups as offensive more often while failing to catch harmful content directed at those same groups.
  • False Positives and Negatives: The systems get it wrong in both directions: innocent conversations get flagged while truly harmful content slips through unnoticed. Users get annoyed and platforms stay exposed. Both failure modes fall out of a single decision threshold, as the sketch below shows.
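To make the last two points concrete, here is a minimal Python sketch. Everything in it is assumed for illustration: `score_toxicity` stands in for a real moderation model, and the keyword list is a placeholder. The point is how a single threshold trades false positives against false negatives, and how coded language slips past keyword-style detection entirely.

```python
# Minimal sketch of the tradeoffs above. `score_toxicity` is a
# hypothetical stand-in for a real moderation model; the keyword
# heuristic exists only for illustration.

BANNED_WORDS = {"insult1", "insult2"}  # placeholder keyword list

def score_toxicity(message: str) -> float:
    """Returns an estimated probability that the message is harmful."""
    hits = sum(word in BANNED_WORDS for word in message.lower().split())
    return min(1.0, 0.9 * hits)

def moderate(message: str, threshold: float = 0.5) -> str:
    """Lower thresholds catch more harm (fewer false negatives)
    but flag more innocent chat (more false positives)."""
    return "blocked" if score_toxicity(message) >= threshold else "allowed"

# Coded language contains no banned keyword, so it sails through:
# a false negative, and exactly the adaptation problem described above.
print(moderate("meet me at the usual place, wink wink"))  # -> allowed
```

Raising the threshold to cut down on annoying false flags only widens the gap that coded language already exploits; lowering it floods users with wrongful blocks. Filtering alone cannot escape that bind.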

The Opportunity: Safe, Local-Language AI

Without a doubt, this failure reveals a huge opportunity for doing AI content moderation properly, especially in local languages. As demand grows for conversational bots that are safe and culturally sensitive, a few platforms have responded with more regulated, privacy-first AI companions. Candy AI, for example, offers tightly controlled and customizable user interactions, reflecting a trend the Indian market is rapidly adopting, and you can directly compare how these models approach safer user experiences.

The Language Gap

Here is the issue with the majority of global AI chatbots: they are largely English-based. For a non-English speaker in India, using conversational AI means operating in a language other than their own. That gap is very wide.

According to one report, the conversational AI market in India is set to rise from ₹38.10 billion in 2024 to ₹152.31 billion in 2030, an annual growth rate of 26.22% (the figures are consistent: 38.10 × 1.2622⁶ ≈ 152). Another forecast puts the market at $7.8 billion in 2025 and $34.6 billion in 2031. Either way, in six years the market will be more than four times as large.

What Wins Investor Trust

The platforms that make it to the finish line will distinguish themselves through deliberate architecture:

  1. Transparent AI Content Moderation Policies: Clear community rules tailored to local cultural norms. A Marathi bot should operate under Marathi cultural expectations, while a Tamil bot should reflect Tamil norms.
  2. True Language Mastery: The service must genuinely command the language it works in: understanding idioms, slang, and cultural references, and recognizing culture-specific patterns of offensive language. This goes far beyond mere translation.
  3. Human + AI Hybrid Systems: AI only flags situations that might be a problem; human moderators from the community in question review the content itself. The company chooses accuracy over efficiency here (see the sketch after this list).
  4. Focused Use Cases: Instead of trying to cover everything, pick a few high-value use cases such as mental health support, financial literacy, or education, and concentrate on them. Investors agree that a narrow focus allows for rigorous AI content moderation.
  5. Radical Transparency: State limitations upfront and prominently, so the system avoids the hallucination problems that have been its predecessors' downfall.
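As a rough illustration of point 3, the routing might look like the sketch below. Everything here is assumed: `ai_risk_score` stands in for whatever model a platform actually runs, the thresholds are arbitrary, and a real review queue would live in a database rather than in memory.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds ambiguous messages for human moderators
    drawn from the relevant language community."""
    pending: list = field(default_factory=list)

def ai_risk_score(message: str) -> float:
    """Hypothetical model call returning an estimated probability of harm."""
    return 0.55  # fixed placeholder so the sketch runs

def route(message: str, queue: ReviewQueue) -> str:
    score = ai_risk_score(message)
    if score >= 0.9:              # clear violation: block automatically
        return "blocked"
    if score >= 0.4:              # ambiguous: accuracy over efficiency
        queue.pending.append(message)
        return "held_for_review"
    return "allowed"              # clearly benign

queue = ReviewQueue()
print(route("kal raat ko milte hain?", queue))  # -> held_for_review
```

The design choice is that the middle band is deliberately wide: anything the model is unsure about goes to a human instead of being auto-decided, which is exactly the accuracy-over-efficiency tradeoff described above.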

What Investors Now Demand

Every investor keeping a keen eye on the conversational AI space poses these questions upfront:

  • How do you manage AI content moderation at scale while remaining resource-efficient?
  • What measures do you have in place to tackle defamatory or criminal content generated by users?
  • How do your systems handle regional languages and cultural norms?
  • What is your compliance plan for when the regulations change?
  • Can you demonstrate that your AI content moderation lowers the number of false positives and negatives? (The sketch below shows the kind of evidence this usually means.)
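That last question typically comes down to metrics measured against a human-labeled evaluation set. A minimal sketch, with placeholder data invented purely for illustration:

```python
# Measures moderation quality against human-labeled ground truth.
# `predictions` and `labels` below are illustrative placeholders.

def moderation_metrics(predictions, labels):
    """predictions/labels use 1 = harmful, 0 = benign."""
    tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))  # false positives
    fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"false_positives": fp, "false_negatives": fn,
            "precision": precision, "recall": recall}

print(moderation_metrics([1, 0, 1, 1], [1, 0, 0, 1]))
# -> 1 false positive, 0 false negatives, precision ≈ 0.67, recall = 1.0
```

A platform that tracks these numbers over time, per language and per use case, can actually answer the question; one that cannot produce them is in the position Astrotalk found itself in.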

Astrotalk could not provide convincing answers to these questions, and a $100 million deal went up in smoke because of it.
