After laying off more than 11,000 workers in February, Meta explained in a blog post that it already had its own artificial intelligence: LLaMA will be available under a non-commercial license for use by academic and industrial researchers, civil society and government entities. The reason for this choice, according to the company, is to "maintain the integrity of the AI and prevent it from being misused by anyone".
Now, information about this technology has been leaked before reaching the market, which puts Meta at a disadvantage against competition that is already very strong. After the popular ChatGPT from OpenAI, Google Bard and Bing with ChatGPT, big tech companies now have a very promising field to explore in artificial intelligence, and the race to succeed in it is already fierce.
Meta wanted to give researchers free access so they could get to know LLaMA, but the interface and its operation have now been made public. Meta said it would release checkpoints for the model only to researchers who applied, but it seems someone has torrented the model.
Someone (not Meta) leaked the models and got the idea of announcing the link via a GitHub pull request. As they explain on Reddit, "saving bandwidth is a joke. Meta employees may not have noticed or are still thinking about how to react." The leaked documents include examples of the answers LLaMA offers to users' questions, all of them in English.
What LLaMA is like, according to the leak

LLaMA shows it has the ability to solve many tasks with performance similar to GPT-3. In essence it is similar to DeepMind's Chinchilla project (70B parameters). However, as the leaked document explains, unlike Chinchilla, PaLM or GPT-3, Meta's AI uses public data, which makes the work compatible with open-sourcing.
Checkpoints and the tokenizer for Meta's LLaMA are now available for download via torrents thanks to some 4chan users (an internet forum hosting a wide variety of discussions, some of them highly controversial). Meta, for its part, asks you to fill out a form to gain access.
Many existing AIs are based on data that is not public or is undocumented (for example, "Books – 2TB" or "Social Media Conversations"). The document includes a summary of the modifications introduced to the transformer architecture (Vaswani et al., 2017) and of LLaMA's training method. Some of the biases and toxicity encoded in these models are also discussed. Among the sources used to train the models are Wikipedia, GitHub, English CommonCrawl and arXiv, among others.
We already knew that this AI is trained on text in 20 different languages, although focused on those with Latin and Cyrillic alphabets. LLaMA takes sequences of words as input and predicts the next one, recursively generating text. Meta explained that LLaMA is a relatively small model that requires less power and fewer computing resources.
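The generation loop described above (predict the next token, append it, feed the result back in) can be sketched with a toy stand-in for the model. The real LLaMA scores every vocabulary item with a transformer; the `toy_next_token` table below is a hypothetical placeholder used only to make the loop runnable:

```python
def toy_next_token(tokens):
    """Hypothetical stand-in for a language model: given the sequence so far,
    return the most likely next token. A real model like LLaMA computes a
    probability over the whole vocabulary; here a fixed bigram table suffices."""
    bigrams = {"the": "cat", "cat": "sat", "sat": "down"}
    return bigrams.get(tokens[-1], "<eos>")

def generate(prompt, max_new_tokens=10):
    """Recursively extend the prompt: each predicted token is appended
    and fed back in, which is exactly the loop the article describes."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)
        if nxt == "<eos>":  # stop when the model emits an end marker
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```

Swapping `toy_next_token` for a call into a trained model yields greedy decoding; sampling from the model's probabilities instead produces more varied text.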
Like every other artificial intelligence released in recent months, LLaMA is not exempt from errors and inaccurate information. Therefore, Meta explains that it is currently working to minimize problems such as biases, toxic comments and other nonsense.