Artificial Intelligence
Meta announces significant layoffs involving 11,000 workers

The company has had a difficult year as a result of TikTok’s growth and the public (and costly) struggles of its attempt to build the metaverse.
Meta has announced that it will lay off 11,000 workers, roughly 13% of its entire workforce. In a blog post announcing the news, CEO Mark Zuckerberg accepted blame for being too bullish about the company’s growth prospects in the wake of the pandemic-era surge.
According to Zuckerberg, “during the beginning of Covid, the world quickly migrated online and the explosion of e-commerce contributed to outsized revenue growth. Many individuals believed that this acceleration would be permanent and carry on long after the pandemic was over. I decided to dramatically boost our investments because I felt the same way. Sadly, things didn’t turn out the way I had hoped.”
By reducing spending and staff, Zuckerberg said, the firm will become “leaner and more efficient” and devote more resources to “a smaller number of high priority growth areas,” such as advertising, artificial intelligence, and the metaverse. The company’s recruiting staff, he added, would be “disproportionately harmed” by the reduction. Meta reported approximately 87,000 employees in September; today’s layoffs are the first significant cuts since the company’s founding in 2004.
Why has Meta taken such a beating? A predicted slowdown in the US economy has sapped momentum for many tech stocks, but the company’s prospects have also been hurt by fierce competition and strategic missteps.
Meta’s phenomenally profitable ad business has been squeezed by TikTok’s growth and changes to Apple’s privacy policies, and the company’s investments in the nascent metaverse look increasingly misplaced. So far in 2022, Meta has lost $9.4 billion on its metaverse technology, and it anticipates spending significantly more in the future. Meanwhile, Horizon Worlds, the company’s primary metaverse social platform, is so unreliable and unpopular that Meta’s own managers have had to pressure staff into using it.
Meta’s stock has plummeted as the bad news has mounted. Its market value has decreased by $700 billion over the last few weeks, and its stock price has fallen by more than 70% this year. However, the company’s stock price rose by more than 4% in pre-market trading after Zuckerberg announced the layoffs.
Meta is not the only tech company making significant cuts. This week, Salesforce said it had let go of hundreds of people; in August, Snap announced plans to reduce its headcount by 20%; and, under new owner Elon Musk, Twitter fired thousands of employees.
Zuckerberg wrote in the blog post announcing Meta’s layoffs that affected employees in the US will receive 16 weeks of base salary plus an additional two weeks for each year of service, six months of health insurance, and assistance with finding new employment and resolving immigration concerns. The company will also implement a hiring freeze through the first quarter of 2023, Zuckerberg said, “with a tiny number of exceptions.”
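The severance arithmetic is straightforward; as a quick illustration (a minimal sketch — the post does not specify how partial years of service are counted, so this assumes whole years):

```python
def severance_weeks(years_of_service: int) -> int:
    """Weeks of base salary under the package described in the post:
    16 weeks plus two additional weeks per year of service.
    Assumes whole years; the post doesn't address partial years."""
    return 16 + 2 * years_of_service

# e.g. an employee with five years of service
print(severance_weeks(5))  # 26 weeks
```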
The CEO of Meta concluded his letter to staff with a statement that seemed to be directed at outside onlookers, particularly those who were dubious of the company’s foray into the metaverse.
“I think that as a firm, we are presently greatly underrated,” Zuckerberg stated. “Billions of individuals connect using our services, and our communities are always expanding. One of the most profitable companies ever founded, our main business has enormous future potential. We are also at the forefront of inventing the technologies that will determine how people will connect in the future and the next computing platform.”
Artificial Intelligence
Auctoria uses generative AI to create game models

Aleksander Caban, co-founder of Polish VR game developer Carbon Studio, noticed a major problem in modern game design several years ago. He manually created rocks, hills, paths, and other video game environment elements, which was time-consuming and laborious.
Caban created tech to automate the process.
In collaboration with Michal Bugała, Joanna Zając, Karolina Koszuta, and Błażej Szaflik, he founded Auctoria, an AI-powered platform for creating 3D game assets. Auctoria, from Gliwice, Poland, is in Startup Battlefield 200 at Disrupt 2023.
Auctoria was founded on a passion for limitless creativity, Zając said in an email interview. “It was designed to help game developers, but anyone can use it. Few advanced tools exist for professionals; most are for hobbyists and amateurs. We want to change that.”
Using generative AI, Auctoria creates various video game models. One feature generates basic 3D game levels with pathways, while another converts uploaded images and textures of walls, floors, and columns into 3D versions.
Like DALL-E 2 and Midjourney, Auctoria can generate assets from text prompts. Alternatively, users can submit a sketch, which the platform will try to turn into a digital model.
All AI algorithms and training data for Auctoria were developed in-house, according to Zając.
“Auctoria is based 100% on our content, so we’re not dependent on any other provider,” she said. The platform is fully independent: it doesn’t rely on open-source models or external engines.
Auctoria isn’t alone in the emerging market for AI game asset generation tools. Startups including 3DFY, Scenario, Kaedim, Mirage, and Hypothetic also create 3D models, and even Nvidia and Autodesk are entering the space with apps like Get3D, which converts images to 3D models, and ClipForge, which generates models from text descriptions.
Meta has also experimented with tech to create 3D assets from prompts. And in December, OpenAI released Point-E, an AI that synthesizes 3D models for 3D printing, game design, and animation.
Given the size of the opportunity, the race to market new solutions isn’t surprising. According to Proficient Market Insights, the market for 3D models could be worth $3.57 billion by 2028.
According to Zając, Auctoria’s two-year R&D cycle has led to a more robust and comprehensive toolset than rivals.
“Currently, AI-based software is lacking for creating complete 3D world models,” Zając stated. “3D editors and plugins offer only a fraction of Auctoria’s capabilities. Our team started developing the tool two years ago, giving us a ready-to-use product.”
Like all generative AI startups, Auctoria must contend with the legal questions surrounding AI-generated media. It is not yet clear, for example, whether AI-generated works can be copyrighted in the U.S.
However, the Auctoria team of seven employees and five co-founders is delaying answering those questions. Instead, they’re piloting the tooling with game development studios like Caban’s Carbon Studio.
Before releasing Auctoria in the coming months, the company hopes to raise $5 million to “speed up the process” of creating back-end cloud services to scale the platform.
Zając said the funding would reduce the computing time required to create worlds or 3D models with Auctoria. “Achieving a software-as-a-service model requires both infrastructure and user experience enhancements, such as a simple UI, excellent customer service, and effective marketing. We’ll keep our core team small, but we’ll hire more by year’s end.”
Artificial Intelligence
DALL-E 3, from OpenAI, lets artists opt out of training

Today, OpenAI released an updated version of DALL-E, its text-to-image tool that uses ChatGPT, its viral AI chatbot, to make prompting easier.
Most modern, AI-powered image generation tools turn prompts—image descriptions—into photorealistic or fantastical artwork. However, writing the right prompt is so difficult that “prompt engineering” is becoming a profession.
OpenAI’s new DALL-E 3 uses ChatGPT to help flesh out prompts. Subscribers to OpenAI’s premium ChatGPT plans, ChatGPT Plus and ChatGPT Enterprise, can type in an image request and refine it through conversation with the chatbot, receiving the results directly in the chat app.
ChatGPT can make a few-word prompt more descriptive, guiding the DALL-E 3 model.
DALL-E 3 adds more than ChatGPT integration. OpenAI claims that DALL-E 3 produces higher-quality images that more faithfully reflect prompts, especially longer ones. It is also better at rendering text and human hands, both of which have historically tripped up image-generating models.
OpenAI claims DALL-E 3 has new algorithmic bias-reduction and safety mechanisms. For instance, DALL-E 3 will reject requests to depict living artists or public figures. Artists can now choose not to train future OpenAI text-to-image models with their work. (OpenAI and its rivals are being sued for using copyrighted artists’ work to train their generative AI image models.)
As the image-synthesizing generative AI race heats up, DALL-E 3 launches. Midjourney and Stability AI keep improving their image-generating models, putting pressure on OpenAI to keep up.
OpenAI will release DALL-E 3 to premium ChatGPT users in October, followed by research labs and API customers. The company did not say when, or whether, it would release a free web tool as it did for DALL-E 2 and the original model.
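For the API customers mentioned above, generating an image would presumably follow standard OpenAI SDK usage. The sketch below is hypothetical: the “dall-e-3” model name and request shape are assumptions based on OpenAI’s Images API conventions, and the SDK import is deferred so the request-building helper runs without the package installed.

```python
def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Assemble parameters for an image-generation call.
    The "dall-e-3" model name is an assumption based on OpenAI's naming."""
    return {"model": "dall-e-3", "prompt": prompt, "size": size, "n": 1}

def generate_image_url(prompt: str) -> str:
    """Call the OpenAI Images API. Requires the `openai` package and an
    OPENAI_API_KEY in the environment; only this function touches the network."""
    from openai import OpenAI  # deferred: only needed for the live call
    client = OpenAI()
    response = client.images.generate(**build_image_request(prompt))
    return response.data[0].url
```

Only `generate_image_url` needs credentials; the request-building helper can be exercised offline.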
Artificial Intelligence
Microsoft open-sources EvoDiff, a novel protein-generating AI

Proteins, natural molecules that perform vital cellular functions, are at the root of many diseases. Characterizing proteins can reveal disease mechanisms and ways to slow or reverse them, while creating new proteins can lead to entirely new classes of drugs.
Designing proteins in the lab is computationally and labor-intensive. It involves devising a protein structure that could perform a specific function in the body and then finding a protein sequence that could “fold” into that structure; to function, proteins must fold correctly into three-dimensional shapes.
It doesn’t have to be that complicated, though.
This week, Microsoft introduced EvoDiff, a general-purpose framework that generates “high-fidelity,” “diverse” proteins from protein sequences alone. Unlike other protein-generating frameworks, EvoDiff doesn’t need a target protein structure, eliminating the most laborious step.
Microsoft senior researcher Kevin Yang says EvoDiff, which is open source, could be used to create enzymes for new therapeutics, drug delivery, and industrial chemical reactions.
Yang, one of EvoDiff’s co-creators, said in an email interview that the platform will advance protein engineering beyond the structure-function paradigm toward sequence-first design. “EvoDiff shows that ‘protein sequence is all you need’ to controllably design new proteins.”
A 640-million-parameter model trained on data from all protein species and functional classes underpins EvoDiff. “Parameters” are the parts of an AI model learned from training data that define its skill at a problem, in this case protein generation. The model was trained using OpenFold sequence alignment data and UniRef50, a subset of UniProt, the UniProt consortium’s protein sequence and functional information database.
Like modern image-generating models such as Stable Diffusion and DALL-E 2, EvoDiff is a diffusion model. Starting from a protein made almost entirely of noise, EvoDiff gradually subtracts that noise, moving the result step by step toward a plausible protein sequence.
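The denoising idea can be illustrated with a toy sketch. The snippet below stands in for discrete diffusion over amino-acid sequences: it starts from a fully masked (“noise”) sequence and reveals residues step by step. A real system like EvoDiff samples each residue from a trained neural network rather than uniformly at random; everything here is purely illustrative.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 canonical residues
MASK = "#"  # stands in for a fully "noised" position

def toy_denoise(length: int, steps: int, seed: int = 0) -> str:
    """Start from an all-mask sequence and un-mask one position per step.
    A trained model would predict each residue from the partial sequence;
    here we sample uniformly at random, purely for illustration."""
    rng = random.Random(seed)
    seq = [MASK] * length
    order = list(range(length))
    rng.shuffle(order)  # positions are revealed in a random order
    for pos in order[:steps]:
        seq[pos] = rng.choice(AMINO_ACIDS)
    return "".join(seq)

print(toy_denoise(12, 6))   # halfway: six positions still masked
print(toy_denoise(12, 12))  # fully "denoised" toy sequence
```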
Beyond image generation, diffusion models are being used to create music, synthesize speech and, with systems like EvoDiff, design novel proteins.
“If there’s one thing to take away [from EvoDiff], I think it’s this idea that we can — and should — do protein generation over sequence because of the generality, scale, and modularity we can achieve,” Microsoft senior researcher Ava Amini, another co-contributor, said via email. “Our diffusion framework lets us do that and control how we design these proteins to meet functional goals.”
EvoDiff can create new proteins or fill “gaps” in an existing protein design, as Amini noted. Given a fragment that binds to another protein, for example, the model can generate an amino acid sequence around it that meets a set of criteria.
Because it designs proteins in “sequence space” rather than structure space, EvoDiff can also synthesize “disordered proteins,” which never fold into a final three-dimensional structure. Like normal proteins, disordered proteins play important roles in biology and disease, for example enhancing or diminishing the activity of other proteins.
The EvoDiff research hasn’t been peer-reviewed yet, and Microsoft data scientist Sarah Alamdari says the framework needs “a lot more scaling work” before it can be used commercially.
“This is just a 640-million-parameter model, and we may see improved generation quality if we scale up to billions,” Alamdari said via email. “We demonstrated some coarse-grained strategies, but to achieve even finer control, we would want to condition EvoDiff on text, chemical information, or other ways to specify the desired function.”
Next, the EvoDiff team will test whether the model’s generated proteins are viable in the lab. If they are, the team will begin work on the next framework.