Presearch, a decentralized and privacy-oriented search engine, has just launched PreGPT 2.0, marking the company's latest effort to challenge Big Tech's dominance in the AI chatbot space.
The new release brings upgraded language models and a wider selection of open-source AI options, all running on a network of distributed computers rather than centralized data centers.
"Why am I so excited? Because PreGPT 2.0 is so powerful and unrestrained that it has the potential to fundamentally disrupt the echo chamber effect that has long dominated conventional wisdom, reinforcing the herd instinct into blind conformity," Brenden Tacon, innovation and operations lead for Presearch, told Decrypt.
The upgraded chatbot comes with two subscription tiers: a $2 monthly basic plan running Mistral AI's 7B model, and a $5 pro version powered by Venice.ai's more advanced LLMs. Both options promise to keep user data private and conversations unmonitored, with chats permanently erased upon deletion.
PreGPT 2.0's model lineup features six of the best-known names in the open-source AI space: Meta's Llama-3.1-405B (a massive model), Llama-3.2-3B (a very small model built for efficiency), and Llama-3.3-70B (its latest LLM), plus Alibaba's Qwen 32B.
It even includes the older Dolphin 2.9 model, previously known in AI circles for being completely uncensored, capable, and good at roleplay. The company also appears to have fine-tuned the Mistral 7B model to offer a custom version.
"This model gracefully handles a context of 8,000 tokens, which corresponds to about 5,000 words, and you will be throttled to 1,000 messages per month," according to the company's website.
This means the model has a memory of about 5,000 words and won't be able to properly handle conversations that exceed that limit, nor process prompts that are that long.
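As a back-of-the-envelope check, the company's own figures imply roughly 0.625 words per token. A minimal Python sketch of that conversion (the actual ratio varies by tokenizer and language, so treat it as an estimate):

```python
# Rough context-window arithmetic, using the words-per-token ratio implied
# by Presearch's own figures (8,000 tokens ≈ 5,000 words). Real ratios
# depend on the tokenizer and language; this is only an estimate.

WORDS_PER_TOKEN = 5_000 / 8_000  # ≈ 0.625

def estimated_word_budget(context_tokens: int) -> int:
    """Approximate how many English words fit in a given token context."""
    return int(context_tokens * WORDS_PER_TOKEN)

print(estimated_word_budget(8_000))  # -> 5000
```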
What is Presearch?
Presearch, which launched in beta back in 2017 and went live in 2018, is essentially a project that aims to reimagine search engine architecture with decentralized technology.
The platform processes over 12 million monthly searches through a web of independent nodes. Each node operator stakes PRE tokens and contributes computing power to the network, creating a self-sustaining ecosystem that scales organically with demand.
The idea is that a decentralized network makes the profiling of users (Google's business model) more difficult, and could help create a business model that is more transparent and organic.
The platform's advertising model also differs from what you see on Google or Bing, for example.
Instead of bidding wars over keywords, advertisers stake PRE tokens to gain visibility. The more tokens they stake, the better their placement, a mechanism that reduces token circulation while creating predictable revenue.
A portion of these tokens is burned periodically, gradually reducing the total supply from its current 590 million PRE in circulation.
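The article doesn't state the burn rate or schedule, but the mechanics are simple to illustrate. A hypothetical sketch, assuming a 1% burn per period purely for illustration:

```python
# Illustrative supply-burn model. The 590M starting supply comes from the
# article; the 1% quarterly burn rate is a hypothetical placeholder, since
# Presearch's actual burn schedule isn't specified here.

def project_supply(initial_supply: float, burn_rate: float, periods: int) -> list[float]:
    """Return circulating supply after each burn period."""
    supply = initial_supply
    history = []
    for _ in range(periods):
        supply -= supply * burn_rate  # burn a fixed fraction each period
        history.append(supply)
    return history

for quarter, supply in enumerate(project_supply(590_000_000, 0.01, 4), start=1):
    print(f"Q{quarter}: {supply:,.0f} PRE")
```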
PreGPT 2.0 leverages this distributed infrastructure through partnerships with Venice.ai, a privacy-conscious AI provider, and Salad.com, a community that shares decentralized GPU power.
The pro tier runs on Venice.ai's high-performance network, while the basic plan is supported by Salad.com's distributed GPU network.
Both paths encrypt user interactions and avoid storing chat logs, supporting Presearch's commitment to privacy.
PRE's tokenomics keep the system running smoothly. Users earn up to 8 tokens daily for search queries, while node operators receive rewards based on their stake size and search volume.
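Presearch doesn't publish the exact reward formula here, but a common pattern in staking networks is to weight payouts by stake multiplied by volume. A hypothetical sketch of that kind of split (the weighting is an assumption, not Presearch's documented method):

```python
# Hypothetical node-reward split. The article only says rewards depend on
# stake size and search volume; this proportional weighting is an assumed
# illustration, not Presearch's actual formula.

def split_rewards(pool: float, nodes: dict[str, tuple[float, int]]) -> dict[str, float]:
    """Distribute a reward pool in proportion to stake * searches served."""
    weights = {name: stake * searches for name, (stake, searches) in nodes.items()}
    total = sum(weights.values())
    return {name: pool * w / total for name, w in weights.items()}

nodes = {"node-a": (10_000, 5_000), "node-b": (4_000, 9_000)}
print(split_rewards(1_000, nodes))  # node-a earns the larger share
```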
This, at least in theory, looks like a good deal in which both users and advertisers are properly rewarded while helping the ecosystem grow.
PreGPT 2.0 is a separate AI feature added to Presearch's toolkit; the company remains focused on its core mission of decentralized, private search.
The chatbot integration is intended to complement the search experience without overshadowing it.
The goal is to make the whole platform ideal for privacy-conscious users who want a replacement for conventional web searches and are curious about using AI tools in their daily lives.
Hands-On with PreGPT 2.0: Promise and Limitations
Testing PreGPT 2.0 revealed a capable chatbot that prioritizes function over flash. The interface felt cleaner than rivals like Venice.ai or HuggingChat, though it lacked the image generation capabilities that have become standard elsewhere.
The inclusion of a system prompt feature lets users tweak the AI's behavior through custom instructions, which is helpful for getting more precise responses; a well-crafted system prompt can considerably boost a model's performance.
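PreGPT exposes the system prompt as a simple text field, but it helps to see where such instructions sit in the chat format most open-source LLM stacks use. A generic illustration (the message structure below is the common convention, not a documented Presearch API):

```python
# General illustration of how a system prompt shapes model behavior in the
# chat format used by most open LLM stacks. PreGPT exposes this as a text
# field in its UI; this structure is the common convention, not a
# documented Presearch API.

messages = [
    {
        "role": "system",  # custom instructions that steer every reply
        "content": (
            "You are a concise research assistant. Answer in bullet "
            "points and do not cite sources you cannot verify."
        ),
    },
    {"role": "user", "content": "Summarize how staking-based ad placement works."},
]

# A chat model consumes the full list, so the system message conditions
# the reply to every user message that follows it.
```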
The overall experience will feel familiar to anyone used to tinkering with different chatbots.
This isn't a revolutionary leap in AI capability but rather a privacy-focused implementation of existing open-source models that are often less powerful than mainstream alternatives like GPT-4o or DeepSeek.
The platform only handles plain text. It can craft a bedtime story or summarize trends, but it lacks support for Excel files and cannot properly handle CSV files, PDFs, or third-party docs.
Instead, users must actually copy the contents of a sheet and paste it in, which is far from ideal.
Those who equate decentralization with sluggish performance have nothing to worry about: replies were fast, and the chatbot never hung. But the models offered the quality you'd expect from open-source LLMs that aren't exactly topping the LLM Arena charts; Llama 3.1 405B currently sits in 27th place and is the most powerful model in Presearch's lineup.
It’s okay, however it’s likewise not outstanding by today’s requirements.
There are presently some open-source applications that are a lot more effective at probably comparable sizes.
For instance, Llama-3.1-Nemotron-70B-Instruct could easily replace the newer (but not better) Llama-3.3-70B, and DeepSeek R1 is leaps ahead of Meta's Llama 3.1 405B, being the best open-source model to date.
Overall, the experience was pleasant; the models performed as expected, and the interface was easier to use than Venice AI, its main rival.
If you are looking for a privacy-focused alternative or want to try every AI tool available today, this feature is certainly worth a look. Just bear in mind that the search engine won't replace Google, and the AI chatbot won't replace ChatGPT, at least not yet.
Edited by Josh Quittner and Sebastian Sinclair