(PatrioticPost.com) - According to the Stanford University research team, its AI can compete with OpenAI’s ChatGPT, the industry leader in consumer-facing AI. But while capable models can be surprisingly simple and cheap to construct, they present a distinct challenge once put into operation.
Artificial intelligence may seem inexpensive to build, and developing a model can indeed be done cheaply, but running it at scale is another story.
The Stanford group built its AI, named Alpaca AI, on top of Facebook’s smallest and least expensive open-source language model, LLaMA 7B.
Reports show that once the Stanford researchers had the LLaMA 7B model running, they essentially asked GPT to take 175 pre-existing, human-written instruction/output pairs and generate more in the same style and format, 20 at a time.
The process was automated through one of OpenAI’s APIs, and the team quickly amassed a dataset of roughly 52,000 sample conversations to use in post-training the LLaMA model. Generating this mountain of training data cost less than $500.
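The data-generation arithmetic can be sketched in a few lines. This is an illustrative sketch, not the Stanford team’s actual script: the prompt template and the `build_prompt` helper are hypothetical, and only the batching numbers (175 seed pairs, 20 new pairs per request, ~52,000 total) come from the report.

```python
import math

SEED_PAIRS = 175     # human-written instruction/output pairs (per the report)
PER_REQUEST = 20     # new pairs requested per API call (per the report)
TARGET = 52_000      # approximate size of the final training set

def build_prompt(seed_examples, n_new=PER_REQUEST):
    """Assemble a self-instruct-style prompt from a few seed examples.

    Hypothetical template for illustration; the real prompt wording
    used by the researchers is not given in the article.
    """
    shown = "\n\n".join(
        f"Instruction: {ex['instruction']}\nOutput: {ex['output']}"
        for ex in seed_examples
    )
    return (
        f"Here are example instruction/output pairs:\n\n{shown}\n\n"
        f"Write {n_new} new pairs in the same style and format."
    )

# At 20 new pairs per call, reaching ~52,000 examples takes:
calls_needed = math.ceil(TARGET / PER_REQUEST)
print(calls_needed)  # 2600 API calls
```

At a few cents per call, 2,600 requests lands comfortably under the reported $500 budget.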
That data was then used to fine-tune the LLaMA model, a task that took three hours on eight 80-GB A100 cloud computing processors. The price tag came in at around $100.
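The reported fine-tuning bill is easy to sanity-check. The hourly rate below is an assumption (on-demand A100 pricing varies by provider); only the GPU count, the three-hour runtime, and the ~$100 total come from the report.

```python
# Back-of-the-envelope check on the reported fine-tuning cost.
GPUS = 8                   # 80-GB A100s, per the report
HOURS = 3                  # reported fine-tuning time
RATE_PER_GPU_HOUR = 4.00   # assumed cloud price in USD (varies by provider)

total = GPUS * HOURS * RATE_PER_GPU_HOUR
print(f"${total:.2f}")  # $96.00 -- in line with the reported ~$100
```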
The researchers then pitted their newly minted model, Alpaca, against the language model underlying ChatGPT across several domains, including email writing, social media, and productivity tools. Of these evaluations, Alpaca won 90 and GPT won 89.
Reports reveal that with each response it provides, an AI executes billions of computations in its quest to produce a relevant answer to a prompt; this demands a tremendous amount of computing power, which is costly.
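To put “billions of computations” in perspective, a common rule of thumb (an approximation, not an exact figure) is that a decoder-only transformer performs roughly 2 × N floating-point operations per generated token, where N is the parameter count. The response length below is an assumed example value.

```python
# Rough illustration of per-response compute for a 7-billion-parameter
# model such as LLaMA 7B, using the ~2*N FLOPs-per-token rule of thumb.
PARAMS = 7_000_000_000
FLOPS_PER_TOKEN = 2 * PARAMS   # ~14 billion operations per token

RESPONSE_TOKENS = 500          # assumed length of one reply
total_flops = FLOPS_PER_TOKEN * RESPONSE_TOKENS
print(f"{total_flops:.1e}")    # ~7.0e+12 operations for a single 500-token reply
```

Multiply that by millions of user prompts per day and the hardware bill adds up quickly.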
One firm reportedly spent several hundred thousand dollars per month on AI processing to keep up with consumer demand, far more than the cost of a human employee.
Microsoft reports it is investing several hundred million dollars in a new supercomputer designed specifically to serve OpenAI and its applications, in part because of the high running costs of consumer-facing AI.