CES 2023: Learn the latest information from the greatest technology event of the year


Although CES doesn’t officially start until tomorrow, we’re back in Vegas for the event, and several exhibitors have already shown off their new products at numerous press conferences and media events. In addition to more news from TV makers, gaming laptop manufacturers, smart home firms, and other companies, we’re starting to see some of the early automotive news that typically headlines CES. Here’s a summary of the top news from Day 1 of CES 2023 in case you haven’t caught up yet.

Since last night
But first: even though we covered most of yesterday’s launches in a separate video, more things were announced last night after we had finished filming. For instance, Withings demonstrated the $500 pee-scanning U-Scan toilet computer.

It’s a 90mm block that you install inside your toilet bowl like a deodorizer, and it employs a microfluidic device that functions like a litmus test to identify the components in your pee. Withings is developing a consumer-focused version that will evaluate your nutrition and hydration levels and forecast your ovulation and period cycles, though you’ll need to choose the precise tests you wish to run in your module. It is still awaiting regulatory approval in the European Union before launching in the US.

In less… gross news, we also saw the Fufuly pulsing cushion by Yukai Engineering. A vibrating cushion may sound like something out of an anime, but the concept is that cuddling something that simulates a real-life pulse may have calming effects. Another thing that could calm anxiety? Watching a video of adorable birds! Bird Buddy unveiled a brand-new smart feeder with a built-in camera so you can watch your feathered friends while they build nests. The latest version, which is intended for hummingbirds, uses AI to recognize the different species in the area and, in conjunction with a motion sensor, determines when they are ready for a feast.

Speaking of nibbles, there was a ton of food-related technology news last night, such as the $1,000 stand mixer from GE Profile that has a built-in digital scale and voice controls. We also saw OneThird’s freshness scanners, which determine the freshness of produce using near-infrared lasers and secret algorithms. Even the shelf life of an avocado can be determined instantly, preventing food waste!

We also saw the Wisear neural earbuds that let you control playback by clenching your jaw, Valencell’s blood pressure monitor that clips onto your finger, and L’Oréal’s robotic lipstick applicator for people with limited hand or arm mobility. There were smart speakers, smart pressure cookers, smart VR gloves, smart lights, and more.

On to today’s news. Ahead of the onslaught that is set to arrive tomorrow, there was only a small trickle of auto news. Volkswagen teased the ID.7 EV sedan, giving us only the name and a rough body shape. BMW, meanwhile, revealed the i Vision Dee, or “Digital Emotional Experience,” a further development of its futuristic i Vision concept vehicle. It’s a simplified design with a head-up display that spans the entire front windshield. Many of the Dee’s features are expected to make their way into production vehicles starting in 2025, notably those built on BMW’s new NEUE KLASSE (new class) EV platform. The Dee will also carry BMW’s Mixed Reality slider to regulate how much digital content is shown on the display.


TVs
Samsung’s premium 2023 TVs weren’t unveiled until the evening, with this year’s models emphasizing MiniLED and 8K technologies. The company also added more sizes to its lineup and unveiled new soundbars with Dolby Atmos support at all price points. Meanwhile, competitor LG unveiled a 97-inch M3 TV that can wirelessly receive 4K 120Hz content, letting you deal with fewer cables in your living room, and… more soundbars. Leave it to LG and Samsung to essentially mirror each other’s moves.

Hisense, competing with a comparatively smaller TV, announced its 85-inch UX Mini LED TV, which has more than 5,000 local dimming zones and a peak brightness of 2,500 nits. Startup Displace, meanwhile, demonstrated a brand-new 55-inch wireless OLED TV that attaches to any surface via vacuum suction, doing away entirely with the need for a wall mount or stand. You can even live without a power cord thanks to its four built-in batteries. Essentially, it’s a fully functional, portable TV.

Laptops

We also saw more laptops from HP, MSI, and ASUS. ASUS brought a laptop with glasses-free 3D, a sizable Zenbook Pro 16X with lots of room for thermal dissipation, and a Zenbook 14X with a ceramic build; both of the latter Zenbooks have OLED displays. HP, meanwhile, unveiled a new line of Dragonfly Pro laptops designed to simplify the buying process by removing most configuration options. The Windows version uses an AMD CPU exclusively and has a column of hotkeys on the right of the keyboard offering shortcuts to camera settings, a control center, and 24/7 tech support, while the Dragonfly Pro Chromebook has an RGB keyboard and Android-like Material You theming. The last of those buttons can be programmed to open a particular program, file, or website.

We’re also getting our first audio news, starting with JBL. The company presented its lineup of five soundbar models for 2023, all of which will support Dolby Atmos. It also introduced new true wireless earbuds with a “smart” charging case that includes a 1.45-inch touchscreen and controls for volume, playback, ANC, and EQ presets. Almost simultaneously, HP unveiled the Poly Voyager earbuds, which are comparable to the JBL in terms of controls and also have a touchscreen on the carrying case. The Voyager additionally features a Broadcast mode that lets you connect the case to an older device with a headphone port (like an airplane’s seatback system) via the included 3.5mm-to-USB-C cable, so you can watch movies during a flight without having to bring along a second set of headphones.

There’s a ton more CES news coming, not only today but for the remainder of the week. We didn’t even have room for Citizen’s latest wristwatch or Samsung’s new, more affordable Galaxy A14 smartphone. Keep checking back for updates on all CES 2023 news.


Apple confirms it intends to work with Google’s Gemini platform in the future


After the WWDC 2024 keynote, which unveiled Apple Intelligence and announced a collaboration with OpenAI to integrate ChatGPT into Siri, Senior Vice President Craig Federighi confirmed that Apple intends to work with additional third-party models. The first example the executive gave was Google, one of the companies Apple has been considering for a potential partnership.

“In the future, we are excited about the prospect of integrating with other models, such as Google Gemini,” Federighi expressed during a post-keynote discussion. He promptly stated that the company currently has no announcements to make, but that is the overall direction they are heading in.

OpenAI’s ChatGPT is set to become the first external model to be integrated later this year. Apple says users will be able to access the system without creating an account or paying for premium services. As for how that platform fits into the updated iOS 18 version of Siri, Federighi confirmed that the voice assistant will ask users before handing a request off to ChatGPT rather than handling it with Apple’s own models.

“Now you can accomplish this directly through Siri, without the need for any additional tools,” stated the Apple executive. “Siri, importantly, will ask you before going out to ChatGPT. Then you can engage in a dialogue with ChatGPT. And if there is any relevant data in your request that you want to provide to ChatGPT, we will ask, ‘Would you like to send this photograph?’ From a privacy standpoint, you always maintain control and have complete visibility.”


Reinforcement learning AI has the potential to introduce humanoid robots into the real world


AI tools like ChatGPT are revolutionizing our digital experiences, but the next frontier is bringing AI interactions into the physical world. Humanoid robots, trained with a specific AI, have the potential to be incredibly useful in various settings such as factories, space stations, and nursing homes. Two recent papers in Science Robotics emphasize the potential of reinforcement learning to bring robots like these into existence.

According to Ilija Radosavovic, a computer scientist at the University of California, Berkeley, there has been remarkable advancement in AI in the digital realm, thanks to tools like GPT. But he believes AI in the physical world holds immense potential for transformation.

The cutting-edge software that governs the movements of bipedal bots frequently employs a technique known as model-based predictive control. It has resulted in the development of highly advanced systems, like the Atlas robot from Boston Dynamics, known for its impressive parkour abilities. However, programming these robot brains requires a considerable amount of human expertise, and they struggle to handle unfamiliar situations. Using reinforcement learning, AI can learn through trial and error to perform sequences of actions, which may prove to be a more effective approach.
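As a minimal illustration of the trial-and-error idea, here is a tabular Q-learning sketch on a made-up toy problem. This is purely illustrative and not the controllers discussed in the papers, which use neural-network policies over continuous joint commands; the learn-from-reward loop, however, is the same idea.

```python
import random

# Toy setup (purely illustrative): the agent starts at cell 0 of a
# 1-D track and earns reward only by reaching cell 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value estimate per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.3       # high exploration helps early episodes

random.seed(0)
for _ in range(500):                         # trial-and-error episodes
    s, done = 0, False
    while not done:
        # Explore sometimes; otherwise exploit the current estimate.
        a = random.randrange(2) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # Temporal-difference update nudges Q toward the observed return.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [row.index(max(row)) for row in Q]
print(policy[:GOAL])   # the learned greedy policy: always step right
```

No human ever tells the agent *how* to reach the goal; the reward alone shapes the behavior, which is exactly the appeal over hand-programmed controllers.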

According to Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers, the team aimed to test the limits of reinforcement learning in real robots. Haarnoja and his team decided to create software for a toy robot named OP3, manufactured by Robotis. The team had the goal of teaching OP3 to walk and play one-on-one soccer.

“Soccer provides a favorable setting for exploring general reinforcement learning,” states Guy Lever of Google DeepMind, who coauthored the paper. It demands careful planning, adaptability, curiosity, collaboration, and a drive to succeed.

“Operating and repairing larger robots can be quite challenging, but the smaller size of these robots allowed us to iterate quickly,” Haarnoja explains. The researchers first trained the machine-learning software on virtual robots before deploying it on real ones. This technique, called sim-to-real transfer, helps ensure the software is prepared for the challenges it may face in the real world, such as robots falling over and breaking.

The virtual bots were trained in two stages. In the first, the team trained one AI to get the virtual robot up off the ground and another to score goals without losing its balance. The AIs were fed data including the robot’s joint positions and movements, as well as the positions of other objects in the game captured by external cameras. (In a recently published preprint, the team developed a version of the system that relies on the robot’s own vision instead.) The AIs had to output new joint positions; when these worked out well, their internal parameters were adjusted to make the successful actions more likely. In the second stage, the researchers trained an AI to imitate the first two AIs and to score against closely matched opponents (versions of itself).
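The second stage, distilling two specialist policies into one network, is essentially behavior cloning. A minimal sketch, with made-up 1-D linear "teachers" standing in for the get-up and scoring policies (nothing here reflects DeepMind's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical specialist "teachers": each maps a 1-D observation
# to a 1-D joint command. Stand-ins for the get-up and scoring policies.
def teacher_getup(obs):
    return 2.0 * obs + 1.0

def teacher_kick(obs):
    return -1.0 * obs + 0.5

def features(obs, flag):
    # Interaction features let a single linear student imitate both
    # teachers, switching on the task flag (0 = get up, 1 = kick).
    return np.array([obs, flag * obs, flag, 1.0])

w = np.zeros(4)
lr = 0.1

for _ in range(2000):                      # behavior cloning by SGD
    obs = rng.uniform(-1, 1)
    flag = float(rng.integers(0, 2))       # which skill is being demanded
    target = teacher_kick(obs) if flag else teacher_getup(obs)
    pred = w @ features(obs, flag)
    # Squared-error gradient step toward the teacher's output.
    w -= lr * (pred - target) * features(obs, flag)

# The distilled student now reproduces both specialists (both ≈ 1.6 here).
print(w @ features(0.3, 0.0), teacher_getup(0.3))
```

In the paper the distilled agent is additionally improved by playing against earlier versions of itself; only the imitation step is sketched here.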

To fine-tune the control software, known as a controller, for the real-world robots, the researchers varied different elements of the simulation, such as friction, sensor delays, and body-mass distribution. In addition to being rewarded for scoring goals, the AI was rewarded for minimizing knee torque, to help prevent injuries.
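That randomization step is commonly called domain randomization: each training episode runs in a slightly different simulator so the controller cannot overfit to one configuration. A minimal sketch, with entirely hypothetical parameter ranges (real values would be tuned to the physical robot):

```python
import random

# Hypothetical ranges for domain randomization; real values would come
# from measurements of the physical robot and its environment.
PARAM_RANGES = {
    "friction":        (0.4, 1.2),   # ground friction coefficient
    "sensor_delay_ms": (0.0, 40.0),  # simulated sensor latency
    "body_mass_scale": (0.9, 1.1),   # +/-10% mass perturbation
}

def sample_sim_params(rng):
    """Draw one randomized simulator configuration."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(0)
for episode in range(3):
    params = sample_sim_params(rng)
    # A real pipeline would now rebuild the simulator with these
    # parameters and run one training episode of the controller in it.
    print(episode, {k: round(v, 3) for k, v in params.items()})
```

A controller that performs well across the whole sampled family of simulators is far more likely to survive the one configuration it cannot rehearse: reality.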

Robots running the RL control software showed impressive improvements: they walked significantly faster, turned with remarkable agility, and recovered from falls in a fraction of the time compared with robots using the scripted controller provided by the manufacturer. More sophisticated abilities also surfaced, such as seamlessly chaining actions together. “It was fascinating to witness the robots acquiring more advanced motor skills,” comments Radosavovic, who was not involved in the study. And the controller learned not only individual moves, but also the strategic play needed to excel at the game, such as positioning itself to block an opponent’s shot.

According to Joonho Lee, a roboticist at ETH Zurich, the soccer paper is truly impressive. “We have witnessed an unprecedented level of resilience from humanoids.”

But what about humanoid robots that are the size of humans? In another recent paper, Radosavovic collaborated with colleagues to develop a controller for a larger humanoid robot. This particular robot, Digit from Agility Robotics, is approximately five feet tall and possesses knees that bend in a manner reminiscent of an ostrich. The team’s approach resembled that of Google DeepMind. Both teams utilized computer brains known as neural networks. However, Radosavovic employed a specialized variant known as a transformer, which is commonly found in large language models such as those that power ChatGPT.

Instead of processing words and generating more words, the model analyzed 16 observation-action pairs: what the robot had sensed and done over the past 16 snapshots of time, spanning roughly a third of a second. It then determined the robot’s next action from that history. Learning was made easier by initially training on the actual joint positions and velocities, providing a solid foundation before progressing to the harder task of training on noisy observations that better reflect real-world conditions. For better sim-to-real transfer, the researchers introduced slight variations to the virtual robot’s body and developed a range of virtual terrains, including slopes, trip-inducing cables, and bubble wrap.
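A minimal sketch of such a rolling context window is below. The transformer is replaced by a stub policy, and the ~48 Hz rate in the comment is simply what 16 snapshots per third of a second implies:

```python
from collections import deque

CONTEXT_LEN = 16   # ~ a third of a second of history at ~48 Hz

class HistoryPolicy:
    """Feeds the last 16 (observation, action) pairs to a policy."""

    def __init__(self, policy_fn):
        self.policy_fn = policy_fn
        self.history = deque(maxlen=CONTEXT_LEN)  # oldest pairs fall off

    def act(self, observation):
        # The real system runs a transformer over the pair sequence;
        # here policy_fn is a stand-in that receives the same context.
        action = self.policy_fn(list(self.history), observation)
        self.history.append((observation, action))
        return action

# Stub policy: average the current and previously seen observations.
def mean_policy(history, obs):
    vals = [o for o, _ in history] + [obs]
    return sum(vals) / len(vals)

controller = HistoryPolicy(mean_policy)
for t in range(20):                # 20 control steps
    controller.act(float(t))
print(len(controller.history))     # capped at 16 pairs
```

Keeping the window fixed at 16 pairs means the model's input size never grows, which is what makes it cheap enough to run inside a real-time control loop.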

With extensive training in the digital realm, the controller successfully operated a real robot through a full week of rigorous tests outdoors, maintaining its balance without falling over a single time. In the lab, the robot withstood external forces, including an inflatable exercise ball thrown at it. The controller also surpassed the manufacturer’s non-machine-learning controller, effortlessly traversing a series of planks on the ground. And while the default controller couldn’t climb a step, the RL controller managed it, despite never encountering steps during training.

Reinforcement learning has gained significant popularity in recent years, particularly for four-legged locomotion, and these studies now demonstrate the successful application of the same techniques to two-legged robots. According to Pulkit Agrawal, a computer scientist at MIT, these papers mark a tipping point by matching or surpassing manually defined controllers. With the immense potential of data, a multitude of capabilities can be unlocked within a remarkably brief timeframe.

The two papers’ approaches are likely complementary: future AI robots will need the resilience of Berkeley’s system and the agility of Google DeepMind’s. Real-world soccer demands both. Soccer has long posed a significant challenge for robotics and artificial intelligence, as Lever notes.

 


Paul Graham asserts Sam Altman was not fired from Y Combinator


Paul Graham, the co-founder of startup accelerator Y Combinator, refuted allegations that Sam Altman, the CEO of OpenAI, was forced to step down as president of Y Combinator in 2019 because of possible conflicts of interest. Graham expressed his disagreement in a series of posts on X on Thursday.

“There have been allegations that Y Combinator terminated Sam Altman,” Graham states. “That statement is false.”

Altman joined Y Combinator as a partner in 2011, initially working there part-time. In February 2014, Graham appointed him as the president of Y Combinator.

Altman, together with Elon Musk, Peter Thiel, Jessica Livingston (a founding partner of Y Combinator), and other individuals, established OpenAI as a nonprofit organization in 2015. They successfully raised $1 billion for this venture.

For a number of years, Altman divided his time between Y Combinator and OpenAI, effectively managing both organizations. However, as per Graham’s account, when OpenAI made the announcement in 2019 about creating a profit-making subsidiary with Altman as the CEO, Livingston informed Altman that he had to make a decision between OpenAI and Y Combinator.

Graham writes that they informed him that if he intended to dedicate himself entirely to OpenAI, they would need to appoint a different person to manage YC, and he consented to this arrangement. “Even if he had stated his intention to appoint another CEO for OpenAI in order to fully dedicate himself to YC, we would have accepted that as well.”

Graham’s account contradicts earlier reporting that Altman was compelled to step down from Y Combinator after the accelerator’s partners alleged that he prioritized personal projects, such as OpenAI, over his responsibilities as president. The Washington Post reported in November that Graham had abruptly cut short a trip abroad to personally fire Altman.

Helen Toner, a former member of the OpenAI board, along with others, attempted to remove Altman as CEO due to allegations of deceptive behavior. However, Altman managed to regain his position. Toner also stated on the Ted AI Show podcast that the real reasons behind Altman’s departure from Y Combinator were concealed at the time.

Allegedly, certain partners at Y Combinator were concerned about the indirect ownership stake Altman held in OpenAI while serving as Y Combinator’s president. Y Combinator’s late-stage fund made a $10 million investment in OpenAI’s for-profit subsidiary.

However, Graham asserts that the investment occurred prior to Altman becoming a full-time employee at OpenAI, and Graham himself was unaware of it.

“The funds did not make a significant investment,” Graham wrote. “Clearly, it had no impact on me, as I only became aware of it 5 minutes ago.”

Bret Taylor and Larry Summers, members of the OpenAI board, wrote an op-ed in The Economist that appears conspicuously timed to coincide with Graham’s social media posts. The op-ed challenges claims made by Toner and Tasha McCauley, both former OpenAI board members, that Altman cannot “consistently resist the influence of profit motives.”

Toner and McCauley’s argument may be valid. According to The Information, Altman is contemplating transforming OpenAI into a profit-making corporation due to pressure from investors, notably Microsoft, who are urging the company to focus on commercial ventures.
