Cryptocurrency exchange ShapeShift founder Erik Voorhees announced on Friday the public launch of his latest venture, Venice AI, a privacy-focused generative AI chatbot.
Privacy is a key concern for users of both cryptocurrency and artificial intelligence, and it was an important factor in the creation of Venice AI, he said.
“I see where AI is going, and it’s going to be captured by the big tech companies that are working with the government,” Voorhees told Decrypt. “That really worried me. It made me realize how powerful AI is and how critical it is. It’s an amazing area of new technology.”
Big tech companies, often working with governments, act as gatekeepers to AI, he lamented, and that arrangement could lead to a dystopian world.
“The antidote to this is open source decentralization,” Voorhees said. “I’m not giving anyone exclusive rights to this stuff.”
Voorhees acknowledged the important work OpenAI, Anthropic and Google have done in advancing the field of generative AI, but said consumers should still have the option to use open source AI.
“I don’t want that to be the only option. I don’t want closed source, monopoly, centralization, censorship, permissions being the only options,” he said. “So there has to be an alternative.”
Voorhees launched the ShapeShift cryptocurrency exchange in 2014. In July 2021, the exchange announced that it would transition to an open-source decentralized exchange (DEX), transferring control of the exchange from Voorhees to ShapeShift DAO.
ShapeShift announced in March that it would be shutting down after becoming embroiled in a battle with the Securities and Exchange Commission (SEC). The exchange agreed to pay a $275,000 fine and comply with a cease-and-desist order to settle claims that it allowed users to trade digital assets without registering as a broker or exchange with the agency.
In the intervening three years, Voorhees said he turned his attention to building permissionless, decentralized AI models.
Voorhees said that Venice AI does not store user data and cannot see user conversations, explaining that Venice AI sends a user’s text input through an encrypted proxy server to a distributed GPU running the AI model, which then sends the answer back.
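The relay design Voorhees describes can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not Venice AI’s actual code: the class and method names are invented stand-ins, the transport encryption he mentions (e.g. TLS between client, proxy, and GPU node) is elided, and a simple echo stands in for model inference.

```python
class GpuWorker:
    """Stand-in for a distributed GPU node running an open-source model."""

    def complete(self, prompt: str) -> str:
        # A real node would run inference (e.g. a Llama 3 variant);
        # here we just echo so the flow is visible.
        return f"response to: {prompt}"


class Proxy:
    """Forwards prompts to a worker without persisting them
    or tying them to a user identity."""

    def __init__(self, worker: GpuWorker):
        self.worker = worker
        self.stored_conversations = []  # intentionally never written to

    def relay(self, prompt: str) -> str:
        # In the described design this hop is encrypted, and only the
        # GPU node ever sees the plaintext of the individual prompt.
        return self.worker.complete(prompt)


proxy = Proxy(GpuWorker())
answer = proxy.relay("hello")
print(answer)                      # the user receives the model's reply
print(proxy.stored_conversations)  # the proxy retains nothing: []
```

The point of the pattern is that the component that can identify the user (the proxy) never stores prompt contents, while the component that sees prompt contents (the GPU node) never learns who sent them.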
“The bottom line is security,” Voorhees said.
“[The GPU] can see the plaintext of a specific prompt, but not all your other conversations. Venice can’t see your conversations, and none of it is tied to your identity,” he said.
Voorhees acknowledged that the system does not provide perfect privacy: it is not completely anonymous, nor entirely zero-knowledge. However, he argued that Venice AI’s model is “substantially better” than the status quo, in which conversations are sent to and stored by centralized companies.
“They see everything, they own it forever, and they tie it to your identity,” Voorhees said.
AI developers such as Microsoft, Google, Anthropic, OpenAI, and Meta have been working to address the concerns of the public and policymakers about the generative AI industry. Several top AI companies have joined government and non-profit initiatives and pledged to develop “responsible AI.”
While these services ostensibly allow users to delete their chat history, Voorhees says it’s naive to assume the data is gone forever.
“If a company has information, you can’t trust that the information is gone,” he said, noting that some government regulations require companies to retain customer information. “People should assume that anything they write to OpenAI goes to them, and that they have it forever.”
“The only way to solve this problem is to use a service that never sends the information to a central repository in the first place,” Voorhees added. “That’s what we were trying to create.”
On the Venice AI platform, chat history is stored locally in the user’s browser and can be deleted whether or not the user creates an account. Customers can set up an account using their Apple ID, Gmail, email, Discord, or by linking their MetaMask wallet.
Creating a Venice AI account does bring benefits, including higher message limits, the ability to modify prompts, and earning points, though points currently serve no function beyond making it easier to track usage. Users who want a more robust feature set can also pay for a Venice Pro account, currently priced at $49 per year.
Venice Pro offers unlimited text prompts, removes watermarks from generated images, enables document uploads, and allows users to “turn off safe mode for uninterrupted image creation.”
Have fun with https://t.co/m2jsJuDuXS
In Venice (using a Pro account) you can modify the “System Prompt”. This is basically like God mode or root access with an interactive LLM.
It can enable interesting viewpoints that regular AI services might censor. pic.twitter.com/qlt0xp0aC9
— Erik Voorhees (@ErikVoorhees) May 10, 2024
Despite the MetaMask account integration, Voorhees noted that users cannot yet subscribe to Venice Pro with digital currency, but said it’s “coming soon.” Meanwhile, because it is built on top of the Morpheus network, the company is rewarding Morpheus token holders.
“If you have one Morpheus token in your wallet, you can get unlimited free pro accounts,” he said. “You don’t even have to pay, just hold one Morpheus token and you will automatically have a Pro account as long as that token is in your wallet.”
As with other tools, cybercriminals continue to find ways around the guardrails built into AI products, for example by using ambiguous language or creating jailbroken copies of popular AI models. However, Voorhees argues that merely asking a large language model for information is by no means illegal.
“If you Google ‘how do you make a bomb,’ you can go find that information. It’s not illegal to seek that information, and I don’t think it’s unethical to seek that information,” he said. “What is illegal and unethical is making a bomb to harm people, and that has nothing to do with Google.
“That is a separate action taken by the user. So I think similar principles apply to Venice specifically, and to AI generally,” he said.
Generative AI models, such as OpenAI’s ChatGPT, have also come under increased scrutiny over how AI models are trained, where data is stored, and privacy concerns. Venice AI collects limited information, such as how you use its products (such as creating new chats), but its website says the platform cannot view or store “data about text or image prompts shared between users and AI models.”
For text generation, Venice uses the Llama 3 large language model developed by Facebook parent company Meta. Customers can also switch between two fine-tuned variants of Llama 3: Nous Hermes 2 Pro and Dolphin 2.9.
Speaking on Twitter Spaces following the launch of Venice AI, Voorhees praised the work Mark Zuckerberg and Meta have done in generative AI, including releasing a powerful LLM as open source.
“Meta deserves tremendous credit for essentially spending hundreds of millions of dollars to train cutting-edge models and release them to the world for free,” he said.
Venice also allows users to generate images using the open source models Playground v2.5, Stable Diffusion XL 1.0, and Segmind Stable Diffusion 1B.
When asked whether Venice AI would use services from OpenAI or Anthropic, Voorhees’ answer was a resounding no.
“We will never offer Claude LLM and we will never offer OpenAI’s services,” he said. “We are not a wrapper for centralized services; we are explicitly a way to access open-source models.”
Voorhees acknowledged that there are concerns about the performance of Venice AI, as it is built on top of the decentralized Morpheus network, which supports open-source smart agents. Performance, he explained, is what the team is focused on.
“If you want to provide people with uncensored, private AI, it needs to be as good as a centralized enterprise,” Voorhees said. “Because otherwise people will prefer the convenience of a central company.”
Edited by Ryan Ozawa.