Microsoft unveils ‘trustworthy AI’ features to fix hallucinations and boost…


Microsoft unveiled a suite of new artificial intelligence safety features on Tuesday, aiming to address growing concerns about AI security, privacy, and reliability. The tech giant is branding this initiative as “Trustworthy AI,” signaling a push towards more responsible development and deployment of AI technologies.

The announcement comes as businesses and organizations increasingly adopt AI solutions, bringing both opportunities and challenges. Microsoft’s new offerings include confidential inferencing for its Azure OpenAI Service, enhanced GPU security, and improved tools for evaluating AI outputs.

“To make AI trustworthy, there are many, many things that you need to do, from core research innovation to this last mile engineering,” said Sarah Bird, a senior leader in Microsoft’s AI efforts, in an interview with VentureBeat. “We’re still really in the early days of this work.”

Combating AI hallucinations: Microsoft’s new correction feature

One of the key features introduced is a “Correction” capability in Azure AI Content Safety. This tool aims to address the problem of AI hallucinations — instances where AI models generate false or misleading information. “When we detect there’s a mismatch between the grounding context and the response… we give that information back to the AI system,” Bird explained. “With that additional information, it’s usually able to do better the second try.”
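The feedback loop Bird describes (detect a mismatch against the grounding context, return that finding to the model, and retry) can be sketched in a few lines. This is purely illustrative: the word-overlap detector and the `generate` callable below are toy stand-ins for whatever groundedness checker and model an application actually uses, not the Azure AI Content Safety API itself.

```python
def find_ungrounded(response: str, context: str) -> list[str]:
    """Toy groundedness check: flag any sentence in the response that
    contains a word not present in the grounding context."""
    context_words = set(context.lower().split())
    flagged = []
    for sentence in response.split(". "):
        words = sentence.lower().split()
        if words and any(w not in context_words for w in words):
            flagged.append(sentence)
    return flagged


def correct(generate, prompt: str, context: str, max_retries: int = 2) -> str:
    """Correction loop: if the response contains ungrounded claims,
    feed that information back and ask the model to try again."""
    response = generate(prompt)
    for _ in range(max_retries):
        flagged = find_ungrounded(response, context)
        if not flagged:
            break  # response is fully grounded; stop retrying
        feedback = "\nThese claims lack support in the context: " + "; ".join(flagged)
        response = generate(prompt + feedback)
    return response
```

In a real deployment the detector would be a trained groundedness model rather than a word-overlap heuristic, but the shape of the loop, detect, feed back, regenerate, is the same one the quote describes: "with that additional information, it's usually able to do better the second try."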

Microsoft is also expanding its efforts in “embedded content safety,” allowing AI safety checks to run directly on devices, even when offline. This feature is particularly relevant for applications like Microsoft’s Copilot for PC, which integrates AI capabilities directly into the operating system.

“Bringing safety to where the AI is is something that is just incredibly important to make this actually work in practice,” Bird noted.

Balancing innovation and responsibility in AI development

The company’s push for trustworthy AI reflects a growing industry awareness of the potential risks associated with advanced AI systems. It also positions Microsoft as a leader in responsible AI development, potentially giving it an edge in the competitive cloud computing and AI services market.

However, implementing these safety features isn’t without challenges. When asked about performance impacts, Bird acknowledged the complexity: “There is a lot of work we have to do in integration to make the latency make sense… in streaming applications.”

Microsoft’s approach appears to be resonating with some high-profile clients. The company highlighted collaborations with the New York City Department of Education and the South Australia Department of Education, which are using Azure AI Content Safety to create appropriate AI-powered educational tools.

For businesses and organizations looking to implement AI solutions, Microsoft’s new features offer additional safeguards. However, they also highlight the increasing complexity of deploying AI responsibly, suggesting that the era of easy, plug-and-play AI may be giving way to more nuanced, security-focused implementations.

The future of AI safety: Setting new industry standards

As the AI landscape continues to evolve rapidly, Microsoft’s latest announcements underscore the ongoing tension between innovation and responsible development. “There isn’t just one quick fix,” Bird emphasized. “Everyone has a role to play in it.”

Industry analysts suggest that Microsoft’s focus on AI safety could set a new standard for the tech industry. As concerns about AI ethics and security continue to grow, companies that can demonstrate a commitment to responsible AI development may gain a competitive advantage.

However, some experts caution that while these new features are a step in the right direction, they are not a panacea for all AI-related concerns. The rapid pace of AI advancement means that new challenges are likely to emerge, requiring ongoing vigilance and innovation in the field of AI safety.

As businesses and policymakers grapple with the implications of widespread AI adoption, Microsoft’s “Trustworthy AI” initiative represents a significant effort to address these concerns. Whether it will be enough to allay all fears about AI safety remains to be seen, but it’s clear that major tech players are taking the issue seriously.


