Journalism professor turns NYC chatbot failure into success

Using a recently failed New York City (NYC) chatbot pilot as inspiration, a data journalism professor built a successful version of his own in minutes, reports The Markup.

The chatbot pilot was supposed to be a useful tool for entrepreneurs. Instead, it became a cautionary tale about the importance of oversight and responsible use of AI.

The NYC chatbot is designed to provide clear guidance to individuals navigating the complexities of starting a business. The AI-powered interface aims to streamline the process by offering advice on permits, regulatory compliance, and other business needs.

However, the report revealed a pattern of misinformation: the chatbot's advice, if followed, could cause users to unwittingly break the law. Examples include incorrect answers about tip allocation, housing discrimination, and cash acceptance policies.

Mayor Eric Adams responded by acknowledging the bot’s shortcomings while defending its potential for improvement.

Jonathan Soma, a professor of data journalism at Columbia University, saw the story. Using the NYC chatbot as a starting point, he showed in a video how to build a similar AI-powered chatbot that can scan uploaded documents and answer questions based on them.
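Soma's video uses off-the-shelf AI tooling, but the core document-retrieval step behind such a chatbot can be sketched in plain Python. The example below is an illustration, not his implementation: it ranks a few hypothetical guidance snippets by TF-IDF overlap with a question, which is the step a document-based chatbot performs before handing the most relevant passages to a language model.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def build_index(docs):
    """Precompute per-document term counts and document frequencies."""
    term_counts = [Counter(tokenize(d)) for d in docs]
    df = Counter()
    for counts in term_counts:
        df.update(counts.keys())
    return term_counts, df

def retrieve(question, docs, term_counts, df, k=1):
    """Rank documents by TF-IDF overlap with the question; return the top k."""
    n = len(docs)
    scores = []
    for i, counts in enumerate(term_counts):
        score = 0.0
        for term in tokenize(question):
            if term in counts:
                # Smoothed inverse document frequency downweights common terms.
                idf = math.log((n + 1) / (df[term] + 1)) + 1
                score += counts[term] * idf
        scores.append((score, i))
    scores.sort(reverse=True)
    return [docs[i] for _, i in scores[:k]]

# Hypothetical guidance snippets, loosely echoing topics from the report.
docs = [
    "Restaurants in NYC must accept cash; refusing cash payments is illegal.",
    "Employers may not take a share of their workers' tips.",
    "Most food-service businesses need a health permit before opening.",
]
term_counts, df = build_index(docs)
print(retrieve("Can my business refuse to accept cash?", docs, term_counts, df))
```

This keyword-overlap approach also illustrates the failure mode Soma mentions: with a very large document set, or a question phrased differently from the source text, the top-ranked passage may not be the relevant one, and the model downstream will answer from the wrong material.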

Soma provides insight into the technical aspects of chatbot development. His reaction to the findings reflects widespread concern about the trustworthiness of AI, especially in contexts where legal consequences are at stake. “I would say that there is always the ability of AI to make things up and hallucinate. Also, if you have a very large set of documents, it can be difficult to find documents that are really relevant,” he said.

Soma’s own attempt at building a working chatbot, which outperformed the NYC bot in accuracy, underscores the challenges of deploying AI. Although AI tools offer great potential, ensuring their accuracy and reliability remains a challenging task.

At that point, Soma said, “It’s 100% guaranteed that, at some point, there’s going to be some kind of mistake in that chain, and there’s going to be some kind of error introduced, and you’re going to get the wrong answer.”

Soma’s discussion goes beyond technical aspects to ethical considerations in the use of AI, especially in journalism. He emphasized the inevitability of errors in AI-generated content and the critical role of human oversight. Chatbots, while useful for low-stakes tasks, are not suitable for providing legal or professional advice without a strict verification mechanism: “You should use them for tasks that are okay if there are errors.”

The evolving role of AI in data journalism is a key theme, with Soma highlighting AI’s capacity to scale tasks and generate insights. However, he warned against over-reliance, advocating a balanced approach that combines AI capabilities with human judgment and fact-checking: “But you can’t do that when you build a chatbot and every conversation needs to be meaningful and have precision for the person using it.”

He added, “I think it’s the chatbots explicitly that’s probably the most problematic part because they’re so confident in everything they say.”
