
Artificial Intelligence

CES 2023: Learn the latest information from the greatest technology event of the year


Although CES doesn’t officially start until tomorrow, we’re already back in Vegas for the event, and several exhibitors have shown off their new products at press conferences and media events. In addition to more news from TV makers, gaming laptop manufacturers, smart home firms, and other companies, we’re starting to see some of the early automotive news that typically headlines CES. Here is a summary of the top news from Day 1 of CES 2023 in case you haven’t caught up yet.

Since last night
But first, even though we covered most of yesterday’s launches in a separate video, more things were announced last night after we finished filming. For instance, Withings demonstrated the $500 pee-scanning U-Scan toilet computer.

It’s a 90mm block that sits inside your toilet bowl like a deodorizer and uses a microfluidic device that works like a litmus test to identify the components in your urine. Withings is developing a consumer-focused version that will evaluate your nutrition and hydration levels and forecast your ovulation and period cycles, though you’ll need to choose the specific tests you want your module to run. It is still awaiting regulatory approval from the European Union ahead of a US launch.

We also witnessed some less… gross news: the Fufuly pulsing cushion from Yukai Engineering. A vibrating cushion may sound like something out of an anime, but the idea is that cuddling something that simulates real-life pulsation can have calming effects. Another thing that could calm anxiety? Watching a video of adorable birds! Bird Buddy also unveiled a brand-new smart feeder with a built-in camera so you can watch your feathered friends while they build nests. The latest version, designed for hummingbirds, uses AI to recognize the different species in the area and, in conjunction with a motion sensor, determines when they’re ready for a feast.

Speaking of nibbles, there was a ton of food-related tech news last night, such as the $1,000 stand mixer from GE Profile with a built-in digital scale and voice controls. We also saw OneThird’s freshness scanners, which gauge the freshness of produce using near-infrared lasers and proprietary algorithms. They can even determine the shelf life of an avocado instantly, helping prevent food waste.

We also saw the Wisear neural earbuds that let you control playback by clenching your jaw, Valencell’s blood pressure monitor that clips onto your finger, and L’Oréal’s robotic lipstick applicator for people with limited hand or arm mobility. There were also smart speakers, smart pressure cookers, smart VR gloves, smart lights, and more.

Now on to today’s news. Ahead of the onslaught expected tomorrow, there was only a small trickle of auto news. Volkswagen teased the ID.7 EV sedan, giving us only the name and a rough body shape. BMW, meanwhile, revealed the i Vision Dee, or “Digital Emotional Experience,” sharing more details about its futuristic i Vision concept vehicle program. It’s a simplified design with a heads-up display that spans the entire front windshield. Many of the Dee’s features are expected to make their way into production vehicles starting in 2025, notably on BMW’s new Neue Klasse (new class) EV platform. The Dee also carries BMW’s Mixed Reality slider, which controls how much digital content is shown on the display.


TVs
Samsung’s premium 2023 TVs also weren’t unveiled until the evening, with this year’s models emphasizing Mini LED and 8K technologies. The company also added more sizes to its lineup and unveiled new soundbars with Dolby Atmos support at every price point. Meanwhile, competitor LG unveiled a 97-inch M3 TV that can wirelessly receive 4K 120Hz content, meaning fewer cables to deal with in your living room, along with… more soundbars. Leave it to LG and Samsung to essentially mirror each other’s moves.

Hisense, a competitor with comparatively smaller TVs, announced its 85-inch UX Mini LED TV, which has more than 5,000 local dimming zones and a peak brightness of 2,500 nits. Startup Displace, meanwhile, demonstrated a brand-new 55-inch wireless OLED TV that attaches to any surface via vacuum suction, doing away with the need for a wall mount or stand entirely. You can even go without a power cord thanks to its four built-in batteries. Essentially, this is a fully functional, portable TV.

Laptops

We also saw more laptops from HP, MSI, and ASUS. ASUS showed a laptop with glasses-free 3D, a sizable Zenbook Pro 16X with plenty of room for thermal dissipation, and a Zenbook 14X with a ceramic build; both of the latter Zenbooks have OLED displays. HP, meanwhile, unveiled a new line of Dragonfly Pro laptops designed to simplify the buying process by removing most configuration options. The Windows version uses an AMD CPU exclusively and has a column of hotkeys on the right of the keyboard offering shortcuts to camera settings, a control center, and 24/7 tech support, while the Dragonfly Pro Chromebook has an RGB keyboard and Android-like Material You theming. The last of those hotkeys can be programmed to open a particular program, file, or website.

We’re also getting our first audio news, starting with JBL. The company presented its lineup of five soundbar models for 2023, all of which will support Dolby Atmos. It also introduced new true wireless earbuds with a “smart” charging case featuring a 1.45-inch touchscreen and controls for volume, playback, ANC, and EQ presets. Almost simultaneously, HP unveiled the Poly Voyager earbuds, which are comparable to the JBL pair in terms of controls and also have a touchscreen on the charging case. The Voyager, however, adds a Broadcast mode that lets you connect the case to an older device with a headphone port (like an airplane’s in-flight entertainment system) via the included 3.5mm-to-USB-C cable, so you can watch movies during a flight without bringing along a second set of headphones.

There will be a ton more CES news not just today but throughout the rest of the week. We didn’t even have time to cover Citizen’s latest smartwatch or Samsung’s new, more affordable Galaxy A14 smartphone. Keep checking back for updates on all the CES 2023 news.

As Editor here at GeekReply, I'm a big fan of all things Geeky. Most of my contributions to the site are technology related, but I'm also a big fan of video games. My genres of choice include RPGs, MMOs, Grand Strategy, and Simulation. If I'm not chasing after the latest gear on my MMO of choice, I'm here at GeekReply reporting on the latest in Geek culture.

Artificial Intelligence

Auctoria uses generative AI to create video game models


Several years ago, Aleksander Caban, co-founder of Polish VR game developer Carbon Studio, noticed a major problem in modern game design: manually creating rocks, hills, paths, and other video game environment elements was time-consuming and laborious.

So Caban built technology to automate the process.

In collaboration with Michal Bugała, Joanna Zając, Karolina Koszuta, and Błażej Szaflik, he founded Auctoria, an AI-powered platform for creating 3D game assets. Auctoria, from Gliwice, Poland, is in Startup Battlefield 200 at Disrupt 2023.

“Auctoria was founded on a passion for limitless creativity,” Zając said in an email interview. “It was designed to help game developers, but anyone can use it. Few advanced tools exist for professionals; most are for hobbyists and amateurs. We want to change that.”

Using generative AI, Auctoria creates various video game models. One feature generates basic 3D game levels with pathways, while another converts uploaded images and textures of walls, floors, and columns into 3D versions.

Like DALL-E 2 and Midjourney, Auctoria can generate assets from text prompts. Users can also submit a sketch, which the platform will try to turn into a digital model.


All AI algorithms and training data for Auctoria were developed in-house, according to Zając.

“Auctoria is based 100% on our content, so we’re not dependent on any other provider,” she said. The platform is fully independent, using no open source components or external engines.

In the emerging market for AI game asset generation tools, Auctoria isn’t alone. Startups such as 3DFY, Scenario, Kaedim, Mirage, and Hypothetic all create 3D models. Even Nvidia and Autodesk are entering the space with apps like Get3D, which converts images to 3D models, and ClipForge, which generates models from text descriptions.

Meta has also experimented with tech that creates 3D assets from prompts, and in December, OpenAI released Point-E, an AI that synthesizes 3D models for 3D printing, game design, and animation.

Given the size of the opportunity, the race to market new solutions isn’t surprising. According to Proficient Market Insights, 3D models could be worth $3.57 billion by 2028.

According to Zając, Auctoria’s two-year R&D cycle has led to a more robust and comprehensive toolset than rivals.

“Currently, AI-based software is lacking for creating complete 3D world models,” Zając stated. “3D editors and plugins offer only a fraction of Auctoria’s capabilities. Our team started developing the tool two years ago, giving us a ready-to-use product.”

Like all generative AI startups, Auctoria must also contend with the legal questions surrounding AI-generated media. It’s not yet clear how, or whether, AI-generated works can be copyrighted in the U.S.

For now, however, the Auctoria team of seven employees and five co-founders is setting those questions aside. Instead, they’re piloting the tooling with game development studios, including Caban’s Carbon Studio.

Before releasing Auctoria in the coming months, the company hopes to raise $5 million to “speed up the process” of creating back-end cloud services to scale the platform.

Zając said the funding would reduce the computing time required to create worlds and 3D models with Auctoria. “Achieving a software-as-a-service model requires both infrastructure and user experience enhancements, such as a simple UI, excellent customer service, and effective marketing. We’ll keep our core team small, but we’ll hire more by year’s end.”


Artificial Intelligence

OpenAI’s DALL-E 3 lets artists opt out of training


Today, OpenAI released an updated version of DALL-E, its text-to-image tool, which uses ChatGPT, the company’s viral AI chatbot, to make prompting easier.

Most modern, AI-powered image generation tools turn prompts—image descriptions—into photorealistic or fantastical artwork. However, writing the right prompt is so difficult that “prompt engineering” is becoming a profession.

DALL-E 3, OpenAI’s new tool, uses ChatGPT to help fill out prompts. Subscribers to OpenAI’s premium ChatGPT plans, ChatGPT Plus and ChatGPT Enterprise, can type in an image request and refine it through conversation with the chatbot, receiving the results directly in the chat app.

ChatGPT can make a few-word prompt more descriptive, guiding the DALL-E 3 model.
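
To illustrate the idea, here’s a minimal, hypothetical sketch of the same two-step flow against OpenAI’s public API (using the openai Python SDK): a chat model first expands a terse idea into a richer prompt, which is then passed to the image endpoint. The model names and the availability of DALL-E 3 through the API are assumptions here, since OpenAI says API access will only follow the ChatGPT rollout.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1: let a chat model expand a few words into a detailed image prompt.
expansion = client.chat.completions.create(
    model="gpt-4",  # assumed model choice for the prompt-expansion step
    messages=[
        {"role": "system",
         "content": "Rewrite the user's idea as a detailed, vivid image prompt."},
        {"role": "user", "content": "a fox in a rainy city at night"},
    ],
)
detailed_prompt = expansion.choices[0].message.content

# Step 2: pass the expanded prompt to the image-generation endpoint.
image = client.images.generate(
    model="dall-e-3",  # assumed identifier; API availability is still pending
    prompt=detailed_prompt,
    size="1024x1024",
    n=1,
)
print(image.data[0].url)
```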

DALL-E 3 adds more than ChatGPT integration, though. OpenAI claims that DALL-E 3 produces higher-quality images that more faithfully reflect prompts, especially longer ones. It also handles text and human hands better, two things that have historically tripped up image-generating models.


OpenAI also claims DALL-E 3 has new mechanisms to reduce algorithmic bias and improve safety. For instance, DALL-E 3 will reject requests to depict living artists or public figures. Artists can also now opt out of having their work used to train future OpenAI text-to-image models. (OpenAI and its rivals are being sued for using copyrighted artists’ work to train their generative AI image models.)

DALL-E 3 launches as the image-synthesizing generative AI race heats up. Midjourney and Stability AI keep improving their image-generating models, putting pressure on OpenAI to keep up.

OpenAI will release DALL-E 3 to premium ChatGPT users in October, followed by research labs and API customers. The company did not say when, or whether, it would release a free web tool as it did for DALL-E 2 and the original DALL-E.


Artificial Intelligence

Microsoft open-sources EvoDiff, a novel protein-generating AI


Proteins, the natural molecules that perform vital cellular functions, are at the heart of all diseases. Characterizing proteins can reveal disease mechanisms and ways to slow or reverse them, while creating proteins can lead to entirely new classes of drugs.

The protein design process used in the lab today is intensive in both compute and human effort. It involves coming up with a protein structure that could perform a specific function in the body and then finding a protein sequence that could “fold” into that structure; proteins must fold correctly into three-dimensional shapes in order to function.

It doesn’t have to be this complicated, though.

This week, Microsoft introduced EvoDiff, a general-purpose framework that it says generates “high-fidelity,” “diverse” proteins from protein sequences alone. Unlike other protein-generating frameworks, EvoDiff doesn’t require any structural information about the target protein, eliminating the most laborious step.

Microsoft senior researcher Kevin Yang says EvoDiff, which is open source, could be used to create enzymes for new therapeutics, drug delivery, and industrial chemical reactions.

Yang, one of EvoDiff’s co-creators, said in an email interview that the platform will advance protein engineering beyond the structure-function paradigm toward sequence-first design. EvoDiff, he says, shows that “protein sequence is all you need” to controllably design new proteins.

A 640-million-parameter model trained on data from all protein species and functional classes underpins EvoDiff. “Parameters” are the parts of an AI model learned from training data that define its skill at a problem, in this case protein generation. The model was trained using OpenFold sequence alignment data and UniRef50, a subset of UniProt, the UniProt consortium’s protein sequence and functional information database.

EvoDiff is a diffusion model, the same class of model behind modern image generators such as Stable Diffusion and DALL-E 2. Starting from a sequence made almost entirely of noise, EvoDiff gradually subtracts that noise, step by step, until it arrives at a plausible protein sequence.
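
To make that denoising loop concrete, here is a toy, hypothetical sketch in Python. It is not EvoDiff’s actual code: the `predict_residue_distribution` function stands in for a trained denoising network and simply returns uniform probabilities, but the overall shape is the same, with every position starting out masked (“pure noise”) and being filled in one step at a time from the model’s predicted distribution.

```python
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard residues
MASK = "#"  # placeholder for a "noised" (unknown) position


def predict_residue_distribution(sequence, position):
    """Stand-in for a trained denoising network: returns a probability
    distribution over amino acids for one masked position.
    Here it is uniform; a real model would condition on the context."""
    return {aa: 1.0 / len(AMINO_ACIDS) for aa in AMINO_ACIDS}


def generate_sequence(length=30, seed=0):
    random.seed(seed)
    seq = [MASK] * length  # start from "pure noise": every position masked
    order = random.sample(range(length), length)  # denoise in a random order
    for pos in order:
        dist = predict_residue_distribution(seq, pos)
        residues, weights = zip(*dist.items())
        seq[pos] = random.choices(residues, weights=weights, k=1)[0]
    return "".join(seq)


if __name__ == "__main__":
    print(generate_sequence())  # prints one fully "denoised" toy sequence
```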


Beyond image generation, diffusion models are being used to design novel proteins, as EvoDiff does, as well as to create music and synthesize speech.

“If there’s one thing to take away [from EvoDiff], I think it’s this idea that we can — and should — do protein generation over sequence because of the generality, scale, and modularity we can achieve,” Microsoft senior researcher Ava Amini, another co-contributor, said via email. “Our diffusion framework lets us do that and control how we design these proteins to meet functional goals.”

As Amini noted, EvoDiff can not only create new proteins from scratch but also fill “gaps” in an existing protein design. Given, for example, the part of a protein that binds to another protein, the model can generate the surrounding amino acid sequence so that the whole protein meets the desired criteria.
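
The same toy loop from the earlier sketch can illustrate this “gap filling”: known residues stay fixed and only the masked positions are sampled. Again, this is just an illustrative sketch reusing the hypothetical helpers defined above, not EvoDiff’s actual method.

```python
def fill_gaps(partial, seed=0):
    """Toy inpainting: keep known residues, sample only the masked ones.
    `partial` is a string such as 'MK##LV##A', where '#' marks unknowns."""
    random.seed(seed)
    seq = list(partial)
    masked = [i for i, aa in enumerate(seq) if aa == MASK]
    for pos in random.sample(masked, len(masked)):  # random fill-in order
        dist = predict_residue_distribution(seq, pos)
        residues, weights = zip(*dist.items())
        seq[pos] = random.choices(residues, weights=weights, k=1)[0]
    return "".join(seq)


print(fill_gaps("MK##LV##A"))  # prints the sequence with the gaps filled in
```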

Because EvoDiff designs proteins in “sequence space” rather than relying on structure, it can also synthesize “disordered proteins” that never fold into a final three-dimensional shape. Like normally folded proteins, disordered proteins play important roles in biology and disease, for instance by enhancing or decreasing the activity of other proteins.

EvoDiff research isn’t peer-reviewed yet. Microsoft data scientist Sarah Alamdari says the framework needs “a lot more scaling work” before it can be used commercially.

“This is just a 640-million-parameter model, and we may see improved generation quality if we scale up to billions,” Alamdari said via email. “We demonstrated some coarse-grained strategies, but to achieve even finer control, we would want to condition EvoDiff on text, chemical information, or other ways to specify the desired function.”

Next, the EvoDiff team will test the model’s generated proteins in the lab to determine whether they’re viable. If they are, the team will begin work on the next generation of the framework.

