OpenAI - Experts & Thought Leaders

Latest OpenAI news & announcements

FARx Innovates Against AI-Powered Voice Cloning

FARx, the world’s only fused-biometrics company, has launched the latest version of its software to help organizations stay ahead of AI-powered voice fraud. Advanced text-to-speech and voice cloning tools can now mimic human speech so convincingly that it is indistinguishable to the human listener, and legacy voice biometrics designed to authenticate human voices are unable to detect the difference.

Recent data reveals a sharp rise in AI-related fraud, with 35% of UK businesses targeted by these attacks, up from 23% in 2024. This surge is driven by increasingly sophisticated tactics, including social engineering, identity theft, deepfakes, voice cloning and synthetic identities: all threats that traditional multi-factor authentication and older voice biometrics solutions cannot reliably combat.

FARx’s next generation of biometrics software

In fact, OpenAI CEO Sam Altman recently warned that AI has “fully defeated” voice biometric authentication, calling it a “crazy” choice for financial institutions that still rely on it for security. Instead, he stressed that new verification methods are essential to protect against this next wave of fraud.

FARx’s next generation of biometrics software, which fuses speaker, speech and face recognition, introduces expanded capabilities for synthetic and cloned voice detection. Trained on 55,000 synthetic voices from real telephony environments, it can reliably distinguish between real and AI-generated voices.

Identity spoofing using synthetic voices

Unlike traditional voice user interfaces, FARx 2.0 identifies not just what is being said but who is speaking, enabling it to detect and block attempts to spoof someone’s identity using synthetic voices, deepfakes, or cloned audio and video. This is also useful when onboarding new customers, to ensure they are not fake or synthetic identities.
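The fusion of speaker, speech and face recognition described above can be pictured as a simple score-combination step. The sketch below is purely illustrative: the class names, weights and thresholds are assumptions for explanation, not FARx’s actual algorithm, which is not public.

```python
# Hypothetical sketch of fused-biometrics scoring. All names, weights and
# thresholds are illustrative assumptions, not FARx's real implementation.
from dataclasses import dataclass

@dataclass
class BiometricScores:
    speaker: float    # 0..1: how well the voice matches the enrolled speaker
    speech: float     # 0..1: liveness of the spoken content (anti-replay)
    face: float       # 0..1: face-match confidence
    synthetic: float  # 0..1: probability the audio is AI-generated

def fused_decision(s: BiometricScores,
                   weights=(0.4, 0.2, 0.4),
                   accept_threshold=0.75,
                   synthetic_threshold=0.5) -> str:
    """Combine modalities; reject outright if the voice looks synthetic."""
    if s.synthetic >= synthetic_threshold:
        return "reject: suspected cloned/synthetic voice"
    fused = (weights[0] * s.speaker
             + weights[1] * s.speech
             + weights[2] * s.face)
    return "accept" if fused >= accept_threshold else "step-up verification"

print(fused_decision(BiometricScores(0.9, 0.8, 0.95, 0.1)))  # accept
print(fused_decision(BiometricScores(0.9, 0.8, 0.95, 0.9)))  # reject
```

The point of fusion is that a cloned voice alone is no longer sufficient: even a perfect speaker-match score is vetoed by a high synthetic-voice probability, and a weak match in one modality can be caught by the others.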
FARx 2.0 can be integrated seamlessly into browsers, apps and existing communications systems to deliver continuous, frictionless multi-factor authentication. Operating in the background, it learns each user so it can detect subtle biometric shifts, such as changes in emotion, tone or behavior. It can also continuously verify identity without disrupting the user experience, and capture biometric data from suspected fraudsters.

Early-stage innovative startups

FARx 2.0 also supports Interactive Voice Response (IVR) telephony systems for in-call synthetic and cloned voice detection across call centers, helpdesks and service desks, as well as video conferencing platforms for real-time deepfake detection.

The announcement comes just two months after FARx secured £250,000 of seed investment, aided by the Seed Enterprise Investment Scheme (SEIS), a UK Government initiative providing tax relief to investors who fund small, early-stage innovative startups.

New era of AI-powered threats

Clive Summerfield, CEO of FARx, said: “This latest iteration of FARx is something we have been working on for a while now, with the aim of delivering an even more sophisticated, flexible biometric multi-factor authentication technology to users across a broad range of industries and applications. Receiving the investment through the SEIS has allowed us to do this at an even greater pace, speeding up the development and delivery of FARx 2.0 to those who need it most.

“In recent months, we have seen in real time, perhaps more than ever before, the true impact of social engineering. Data is already showing an increase in the use of AI for fraud and ID theft; as technology and AI develop, this kind of attack will only become more regular. Legacy voice biometrics and traditional MFA systems are simply no longer enough to outsmart the new era of AI-powered threats.”
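Continuous background verification of the kind described above can be thought of as a rolling check over a stream of biometric match scores, escalating silently when confidence drops rather than interrupting the user. The sketch below is a minimal illustration under assumed window sizes and thresholds; it is not FARx’s published design.

```python
# Hypothetical sketch of continuous, frictionless verification: a rolling
# average over recent biometric match scores. Window size and threshold
# are illustrative assumptions.
from collections import deque

class ContinuousVerifier:
    def __init__(self, window=5, threshold=0.7):
        self.scores = deque(maxlen=window)  # most recent match scores
        self.threshold = threshold

    def observe(self, match_score: float) -> str:
        """Feed a fresh match score; decide without prompting the user."""
        self.scores.append(match_score)
        avg = sum(self.scores) / len(self.scores)
        # "challenge" could mean a silent step-up, e.g. capturing extra
        # biometric evidence, rather than a visible login prompt.
        return "ok" if avg >= self.threshold else "challenge"

v = ContinuousVerifier()
for s in (0.9, 0.85, 0.88):   # consistent scores: user verified silently
    v.observe(s)
v.observe(0.2)                # a sustained drop, e.g. a different speaker,
v.observe(0.1)                # eventually triggers a challenge
```

Averaging over a window, rather than reacting to a single frame, is what keeps the experience frictionless: one noisy reading does not lock the user out, but a sustained shift in voice, face or behavior does trigger escalation.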
Real-world AI-threat scenarios

Summerfield added: “Through further research and development, and the expansion of our integration capabilities, FARx 2.0 offers an even broader spectrum of security. Not only is it built on our patented AI biometric technology, which continuously learns and knows you, becoming stronger the more it is used; it is also trained on tens of thousands of real-world AI-threat scenarios like deepfakes and synthetic voices. The result is a far more tailored approach to MFA security, built to combat both current and future threat landscapes.”

Emirates & OpenAI Advance AI Adoption In Aviation

Emirates and OpenAI have entered into a strategic collaboration to advance AI adoption and innovation across the airline. The collaboration will entail enterprise-wide deployment of ChatGPT Enterprise, supported by tailored AI literacy programs, technical exploration, and executive strategic alignment designed to embed AI capabilities across the organization.

Enormous potential for AI technology

Ali Serdar Yakut, Executive Vice President IT, said: “We see enormous potential for AI technology to support our business requirements, helping us tackle complex commercial challenges, strengthening our operations, and enhancing the customer experience.

“Working closely with OpenAI will make our technology investments both strategic and scalable, enabling us to deliver enhanced value to our employees and customers, fundamentally changing how we innovate, deliver value, and maintain our competitive edge in the industry.”

Future of aviation

Rod Solaimani, Regional Director, MENA & Central Asia at OpenAI, said: “Emirates Group has laid out a bold vision for how AI can transform the future of aviation. With this collaboration, we’re proud to help them bring that vision to life: embedding intelligence across their operations, empowering teams with powerful new tools, and reimagining the travel experience for millions of customers.”

As part of their work together, Emirates and OpenAI will explore opportunities to introduce practical use cases, develop an internal AI champion network, and establish an AI Centre of Excellence. This collaboration will identify key areas for enhancing and expanding AI capabilities across the organization, covering the critical skills, processes and technology needed to power Emirates into the next era.

Early access to cutting-edge AI research

Emirates stands to gain early access to cutting-edge AI research and emerging breakthroughs, as well as collaboration on government-led innovation projects and accelerators.
Additionally, Emirates and OpenAI will jointly run dedicated leadership sessions to explore practical applications, build sponsorship and advocacy for AI initiatives, and give leaders visibility into OpenAI’s product roadmap for long-term planning. The OpenAI and Emirates technology teams will also work closely together to optimize the integration of OpenAI’s models, establish rapid prototyping and deployment best practices, and provide sandbox environments that accelerate experimentation across use cases powered by generative AI.

Emirates is committed to using technology and innovation to lead the way in aviation. The airline’s approach aims to create practical and scalable solutions that benefit travelers, communities, the wider industry, and all its brands and businesses.

Appdome's Dynamic Defense Against Agentic AI Malware

Appdome, the pioneer in protecting mobile businesses, announces the availability of new dynamic defense plugins to detect and defend against Agentic AI Malware and unauthorized AI Assistants controlling Android and iOS devices and applications. The new Detect Agentic AI Malware plugins allow mobile brands and enterprises to know when Agentic AI applications interact with their mobile applications, and to use that data to prevent sensitive data leaks and block unvetted on-device AI Agents from accessing transaction, account, or enterprise data and services.

Agentic AI Malware

Agentic AI Assistants, such as Apple Siri, Google Gemini, Microsoft Copilot, OpenAI ChatGPT, and others, are increasingly available to mobile users in consumer and enterprise environments. However, the same capabilities that make AI Assistants useful to consumers and employees can also be exploited by Agentic AI Malware and Trojans. Both good and bad AI Assistants can gain broad runtime access to screen content, UI overlays, activity streams, user interactions, and contextual data. Malicious AI Assistants can exploit this access to perform data harvesting, session hijacking, and account takeovers, often under the guise of legitimate AI functionality. On Android, this risk is amplified by more permissive APIs. On iOS, threats extend to mirroring-based leaks (e.g., via AirPlay) and enterprise-targeted surveillance.

Agentic AI Assistants

“Mobile brands and enterprises have quickly acknowledged the risk of Agentic AI Assistants on mobile devices,” said Tom Tovar, co-creator and CEO of Appdome. “Our new Detect Agentic AI Malware plugins give mobile brands and enterprises choice and control over when and how to introduce AI Assistant functionality to their users.”

Malicious AI Agents

Agentic AI assistants have wide appeal in internal enterprise and public-facing consumer use cases.
However, in consumer use cases like banking, eWallet, and healthcare applications, some brands might take the view that, for now, the risks outweigh the benefits. Currently, whatever a good AI assistant can do, a bad AI Assistant can do: both can access, extract or input credentials, intercept transactions, and send messages to other users. In enterprise environments, malicious AI Assistants could perform actions as the employee, access proprietary systems, leak sensitive documents, or create entry points for lateral compromise.

Dangers of AI Apps

Wrapped or re-skinned AI apps, especially unofficial or third-party clones of tools like ChatGPT, further increase the attack footprint, often requesting dangerous (overreaching) permissions and quietly transmitting captured data to external servers. Without real-time detection and control, mobile brands remain exposed to surveillance, compliance failures, and data loss at scale.

“The mobile application and device can only know it’s an Agentic AI Assistant,” said Avi Yehuda, co-creator and Chief Technology Officer at Appdome. “The mobile environment has no concept of ‘good’ or ‘bad’ actors, only allowed and disallowed access or permissions; that’s the point.”

Risk of data loss

Security researchers have observed that malicious AI Assistants can extract session data, cryptographic tokens, or decrypted content by analyzing on-screen information in real time. These apps often masquerade as legitimate voice assistants and, once granted access, can silently monitor users’ activity. Furthermore, when coupled with generative AI models, attackers can script automated reconnaissance, tampering, or replay of sensitive operations inside apps.
“If you have sensitive data or regulated use cases on mobile, AI Assistants are no longer a hypothetical risk; they’re an active one,” said Kai Kenan, VP of Cyber Research at Appdome. “Detecting and controlling the use of these tools is a must-have capability for any mobile defense strategy.”

Appdome’s new Detect Agentic AI Malware plugin

Appdome’s new Detect Agentic AI Malware plugin uses behavioral biometrics to detect, in real time, the techniques that malicious or unauthorized AI Assistants use to interact with an Android or iOS application. This includes official, third-party, or wrapped AI apps that impersonate trusted tools or gain elevated permissions. Mobile brands and enterprises can use Appdome to monitor AI Assistant use, or to detect and defend against Agentic AI Assistants using multiple evaluation, enforcement and mitigation options. They can also specify any number of Trusted AI Assistants, guaranteeing that users have access to approved and legitimate Agentic AI Assistants.

New dynamic defenses

“A tsunami of Agentic AI, both good and bad, is approaching the mobile ecosystem. The question is no longer if, but when,” said Chris Roeckl, Chief Product Officer at Appdome. “Most concerning are wrapped versions of legitimate apps, which are increasingly used to trick users into signing in, transacting, and engaging with what looks like your brand, until a malicious agent takes over. Our new dynamic defenses stop Agentic AI from weaponising your app against your users.”
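The allowed/disallowed model Appdome’s CTO describes, combined with a configurable list of Trusted AI Assistants, can be sketched as a simple policy check: the app cannot judge whether an assistant is “good” or “bad”, only whether it is on the allowlist and what enforcement mode is configured. The package names, function name and actions below are illustrative assumptions, not Appdome’s actual API.

```python
# Hypothetical sketch of a Trusted AI Assistants policy. Package names,
# modes and actions are illustrative assumptions, not Appdome's API.
TRUSTED_ASSISTANTS = {
    "com.google.android.apps.bard",  # hypothetical example: Gemini
    "com.openai.chatgpt",            # hypothetical example: ChatGPT
}

def on_assistant_detected(package_name: str, monitor_only: bool = False) -> str:
    """Decide what to do when an Agentic AI Assistant interacts with the app.

    The app has no notion of intent: an unlisted assistant, whether a
    legitimate tool or a wrapped clone, gets the same treatment.
    """
    if package_name in TRUSTED_ASSISTANTS:
        return "allow"
    if monitor_only:
        return "log"    # observe AI Assistant usage without enforcement
    return "block"      # deny access to screen content and session data

assert on_assistant_detected("com.openai.chatgpt") == "allow"
assert on_assistant_detected("com.example.wrapped.chatgpt") == "block"
```

The `monitor_only` flag stands in for the article’s distinction between monitoring AI Assistant use and actively enforcing against it; a wrapped clone masquerading as a trusted brand fails the check simply because its package identity is not on the allowlist.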