
Artificial Intelligence

AI Can Now Tell You If You Have Skin Cancer

Featured Image courtesy of Scripps.org

Cancer is scary, and skin cancer is no exception. Unlike many other cancers, though, it is highly treatable if caught early. Cancerous moles, rashes, and lesions are easy to catch if you pay attention to your body and report any skin changes to a doctor. Soon you might be able to get an easy answer at home: researchers at Stanford University have created an AI algorithm that is nearly as good as a trained doctor at spotting skin cancer.

The algorithm was trained on nearly 130,000 images of skin lesions and then tested against 21 board-certified dermatologists. The researchers say the AI performed “91% as good” as the doctors, showing that artificial intelligence has nearly caught up with some of the most highly trained professionals in health care. They hope that someday soon a mobile app will be able to tell you whether a mole or rash is cancerous, though there is obviously a huge liability issue there: if the algorithm were to tell someone a mole was non-cancerous only for it to turn out malignant, things could get very bad very fast.
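For readers curious what this looks like in practice, the Stanford work is an instance of transfer learning: start from a convolutional network pretrained on ordinary ImageNet photos and fine-tune it on labeled lesion images. Below is a minimal sketch of that general recipe in PyTorch. It is not the researchers' code (they reportedly used a Google Inception network; ResNet-18 here is just a lightweight stand-in), and `train_loader` is an assumed DataLoader of labeled lesion images.

```python
# Minimal transfer-learning sketch for binary skin-lesion classification.
# NOT the Stanford group's code; it only illustrates the general
# "fine-tune a pretrained ImageNet CNN" recipe their paper describes.

import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from an ImageNet-pretrained backbone and replace the classifier
# head with a two-way output: benign vs. malignant.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(train_loader):
    """Run one pass over an assumed DataLoader of (image, label) batches."""
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```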

As mentioned above, skin cancer is easily treated if caught early, but it also progresses faster than many other cancers. Even if a future app were as good as or better than a doctor at detecting cancer, I expect doctors will remain the front line for diagnosis for a long time. Doctors pay every month for malpractice insurance to cover their mistakes; an app has no such coverage. Anyone who builds an AI app that gives medical advice opens themselves up to lawsuits on the off chance the algorithm is wrong.

Still, such an app could prove immensely useful for the peace of mind it can provide people worried about their health. While it isn’t a replacement for a doctor, this latest advance in artificial intelligence shows that computers are getting smarter, and I expect AI to find its way into more and more aspects of our daily lives.


Artificial Intelligence

Threads’ API for developers is now live


Meta finally released its long-awaited Threads API today, so developers can start building apps that use it. Third-party developers will be able to create new experiences around Threads.

Mark Zuckerberg also posted about the launch of the API, saying, “The Threads API is now widely available and will be coming to more of you soon.”

Threads engineer Jesse Chen wrote in a blog post that developers can now use the new API to publish posts, fetch their own content, and build reply-management tools. In other words, developers can let users hide or show replies, or respond to specific ones.
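To make that concrete, here is a rough sketch of publishing a text post through the API's documented two-step flow (create a media container, then publish it). The endpoint paths and parameter names reflect Meta's documentation at launch and may change; `USER_ID` and `ACCESS_TOKEN` are placeholders.

```python
# Rough sketch of publishing a text post with the Threads API's documented
# two-step flow. USER_ID and ACCESS_TOKEN are placeholders; check Meta's
# current docs before relying on exact endpoint paths or parameter names.

import requests

BASE = "https://graph.threads.net/v1.0"
USER_ID = "<threads-user-id>"
ACCESS_TOKEN = "<access-token>"

# Step 1: create a media container for the post.
container = requests.post(
    f"{BASE}/{USER_ID}/threads",
    params={
        "media_type": "TEXT",
        "text": "Hello from the Threads API!",
        "access_token": ACCESS_TOKEN,
    },
).json()

# Step 2: publish the container by its creation_id.
published = requests.post(
    f"{BASE}/{USER_ID}/threads_publish",
    params={"creation_id": container["id"], "access_token": ACCESS_TOKEN},
).json()

print("Published post id:", published.get("id"))
```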

The API will also include analytics, letting developers see metrics such as views, likes, replies, reposts, and quotes at both the media and account level, the company said.
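A hedged sketch of what reading those metrics might look like for a single post, assuming the insights endpoint and metric names from Meta's launch documentation:

```python
# Sketch of a media-level insights call. Metric names mirror those listed
# above (views, likes, replies, reposts, quotes); verify them against
# Meta's current documentation. IDs and tokens are placeholders.

import requests

BASE = "https://graph.threads.net/v1.0"
MEDIA_ID = "<thread-media-id>"
ACCESS_TOKEN = "<access-token>"

resp = requests.get(
    f"{BASE}/{MEDIA_ID}/insights",
    params={
        "metric": "views,likes,replies,reposts,quotes",
        "access_token": ACCESS_TOKEN,
    },
).json()

for metric in resp.get("data", []):
    print(metric["name"], metric.get("values"))
```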

Adam Mosseri, the head of Instagram, first discussed the company’s work on the Threads API in October 2023. The API initially launched in a closed beta with partners such as Techmeme, Sprinklr, Sprout Social, Social News Desk, Hootsuite, and a handful of other developers. Chen said at the time that Meta planned to open the API to more developers in June, and the company has kept its word.

Alongside the new API, the company also released an open-source reference app on GitHub so developers can experiment with it.

In 2023, life got harder for third-party developers who build tools for social networks, as platforms like Twitter (now X) and Reddit restricted or shut down API access to varying degrees. By contrast, decentralized social networks like Mastodon and Bluesky are more open to developers. With more than 150 million users, Meta’s Threads is the most popular of the new social networks, and now that it works with the fediverse and has an API, third-party developers can build some great social media experiences.


Artificial Intelligence

Apple has officially announced its intention to collaborate with Google’s Gemini platform in the future


After delivering the WWDC 2024 keynote, which unveiled Apple Intelligence and announced a collaboration with OpenAI to integrate ChatGPT into Siri, Senior Vice President Craig Federighi confirmed that Apple intends to integrate additional third-party models. Google was the first example the executive gave of a company Apple is considering for a potential partnership.

“In the future, we are excited about the prospect of integrating with other models, such as Google Gemini,” Federighi said during a post-keynote discussion. He quickly added that the company has nothing to announce yet, but that this is the general direction it is heading.

OpenAI’s ChatGPT is set to become the first external model to be integrated later this year. Apple says users will be able to access it without creating an account or paying for premium services. As for how the platform fits into the updated iOS 18 version of Siri, Federighi confirmed that the voice assistant will ask users before sending a request out to ChatGPT rather than handling it with Apple’s own models.

“Now you can accomplish this task directly using Siri, without the need for any additional tools,” the Apple executive said. “Siri will ask you before anything goes out to ChatGPT. Then you can engage in a dialogue with ChatGPT, and if there is any relevant data in your request that you want to provide to ChatGPT, we will ask, ‘Would you like to send this photograph?’ From a privacy standpoint, you always maintain control and have complete visibility.”


Artificial Intelligence

Reinforcement learning AI has the potential to introduce humanoid robots into the real world


AI tools like ChatGPT are revolutionizing our digital experiences, but the next frontier is bringing AI interactions into the physical world. Humanoid robots, trained with the right AI, could be incredibly useful in settings such as factories, space stations, and nursing homes. Two recent papers in Science Robotics highlight the potential of reinforcement learning to bring robots like these into existence.

According to Ilija Radosavovic, a computer scientist at the University of California, Berkeley, there has been remarkable progress in AI in the digital realm thanks to tools like GPT, but he believes AI in the physical world holds immense potential for transformation.

The cutting-edge software that governs the movements of bipedal robots typically employs a technique known as model-based predictive control. This approach has produced highly advanced systems, like Boston Dynamics’ Atlas robot with its impressive parkour abilities, but programming these robot brains requires a great deal of human expertise, and the results struggle with unfamiliar situations. Reinforcement learning, in which an AI learns through trial and error to perform sequences of actions, may prove a more effective approach.
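To see what "trial and error" means mechanically, here is a toy tabular Q-learning loop in Python. It is purely illustrative; the papers discussed here train large neural-network controllers, not lookup tables, but the reward-driven update is the same idea.

```python
# Toy illustration of reinforcement learning's trial-and-error loop:
# tabular Q-learning on a 6-cell corridor (goal: reach the right end).

import random

N_STATES, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]: 0=left, 1=right
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration

def pick(state):
    # Epsilon-greedy: explore sometimes, otherwise exploit (ties broken randomly).
    if random.random() < epsilon:
        return random.randrange(2)
    best = max(Q[state])
    return random.choice([a for a, v in enumerate(Q[state]) if v == best])

for episode in range(300):
    state = 0
    for _ in range(100):  # cap episode length
        a = pick(state)
        next_state = min(max(state + (1 if a else -1), 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Nudge the value estimate toward reward plus discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state
        if state == GOAL:
            break

print("Greedy action per state:", ["right" if q[1] >= q[0] else "left" for q in Q])
```

After a few hundred episodes, the greedy action in every cell points toward the goal, purely because actions that eventually led to reward were reinforced.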

According to Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers, the team aimed to test the limits of reinforcement learning in real robots. Haarnoja and his team decided to create software for a toy robot named OP3, manufactured by Robotis. The team had the goal of teaching OP3 to walk and play one-on-one soccer.

“Soccer provides a favorable setting for exploring general reinforcement learning,” says Guy Lever of Google DeepMind, who coauthored the paper: the game demands careful planning, adaptability, curiosity, collaboration, and a drive to succeed.

“Operating and repairing larger robots can be quite challenging, but the smaller size of these robots allowed us to iterate quickly,” Haarnoja explains. The researchers first trained the machine-learning software on virtual robots before deploying it on real ones. This technique, called sim-to-real transfer, helps ensure that the software is prepared for hazards it may face in the real world, such as robots falling over and breaking.

The training of the virtual bots occurred in two stages. In the first stage, the team trained one AI to lift the virtual robot off the ground and another to score goals without losing its balance. The AIs were fed data including the robot’s joint positions and movements, along with the positions of other objects in the game captured by external cameras. (In a recently published preprint, the team developed a version of the system that relies on the robot’s own vision.) The AIs had to output new joint positions; when those worked out well, their internal parameters were updated to encourage more of the same. In the second stage, the researchers trained an AI to imitate the first two AIs and to evaluate its performance against similarly skilled opponents: versions of itself.
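As a loose illustration of the second stage's imitation step, the sketch below trains a single "student" network to match two "teacher" policies. Everything here is a stand-in: the teachers are untrained networks, the dimensions are invented, and the real system uses deep RL with self-play, so this shows only the shape of the distillation objective, not DeepMind's actual code.

```python
# Loose sketch of policy distillation: one student imitates two specialist
# teachers (a "get-up" policy and a "soccer" policy). Teachers here are
# random fixed networks, purely to show the imitation objective.

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 12  # invented sizes; real robots have many joint inputs

def mlp():
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))

teacher_getup, teacher_soccer, student = mlp(), mlp(), mlp()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    obs = torch.randn(128, OBS_DIM)      # stand-in for simulator states
    fallen = torch.rand(128) < 0.3       # which teacher applies to each state
    with torch.no_grad():
        target = torch.where(fallen[:, None],
                             teacher_getup(obs), teacher_soccer(obs))
    # Train the student to reproduce whichever teacher is relevant.
    loss = nn.functional.mse_loss(student(obs), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```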

To fine-tune the control software, known as a controller, for the real-world robots, the researchers varied different elements of the simulation, such as friction, sensor delays, and body-mass distribution. The AI was also rewarded not just for scoring goals but for minimizing knee torque, to avoid injuring the robots.
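The randomization itself can be as simple as resampling physics parameters each episode, a technique commonly called domain randomization. In the sketch below, the parameter ranges and the `make_env`/`update_policy` hooks are invented placeholders; it shows only the structure of the idea.

```python
# Minimal domain-randomization skeleton: each training episode samples new
# physics parameters so the learned controller cannot overfit to one exact
# simulator. All ranges and hooks are invented placeholders.

import random

def randomized_sim_params():
    return {
        "friction": random.uniform(0.4, 1.2),    # ground contact friction
        "sensor_delay": random.randint(0, 3),    # delay in control ticks
        "mass_scale": random.uniform(0.9, 1.1),  # +/-10% body-mass variation
    }

def make_env(**params):
    # Stub standing in for a physics simulator configured with `params`.
    return params

def update_policy(env):
    # Stub standing in for one RL rollout plus a gradient update in `env`.
    pass

for episode in range(1000):
    env = make_env(**randomized_sim_params())  # resample physics each episode
    update_policy(env)
```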

Robots running the RL control software showed impressive improvements: they walked significantly faster, turned with remarkable agility, and recovered from falls in a fraction of the time compared with robots using the manufacturer’s scripted controller. More sophisticated abilities also emerged, such as fluidly chaining actions together. “It was fascinating to witness the robots acquiring more advanced motor skills,” comments Radosavovic, who was not involved in the study. And the controller learned not just individual moves but the strategic play needed to excel at the game, such as positioning itself to block an opponent’s shot.

According to Joonho Lee, a roboticist at ETH Zurich, the soccer paper is truly impressive. “We have witnessed an unprecedented level of resilience from humanoids.”

But what about humanoid robots that are the size of humans? In another recent paper, Radosavovic and colleagues developed a controller for a larger humanoid robot: Digit from Agility Robotics, which stands approximately five feet tall and has knees that bend like an ostrich’s. The team’s approach resembled Google DeepMind’s in that both relied on neural networks, but Radosavovic used a specialized variant known as a transformer, the architecture behind large language models like those powering ChatGPT.

Instead of ingesting words and generating more words, the model took in 16 observation-action pairs, representing what the robot had sensed and done over the previous 16 snapshots of time (roughly a third of a second), and determined the robot’s next action. Learning was made easier by starting with clean observations of the actual joint positions and velocities before progressing to the noisier observations that better reflect real-world conditions. For better sim-to-real transfer, the researchers slightly varied the virtual robot’s body and built a range of virtual terrains, including slopes, trip-inducing cables, and bubble wrap.
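In code, such a history-conditioned controller might look like the following sketch, which embeds 16 observation-action pairs as tokens, runs them through a small causal transformer, and reads the next action off the final token. All sizes are invented, and this is not the Berkeley team's implementation.

```python
# Sketch of a history-conditioned controller: a small transformer reads the
# last 16 observation-action pairs and outputs the next action. All
# dimensions are invented placeholders.

import torch
import torch.nn as nn

HISTORY, OBS_DIM, ACT_DIM, D_MODEL = 16, 30, 10, 64

class HistoryController(nn.Module):
    def __init__(self):
        super().__init__()
        # Embed each (observation, action) pair into one token.
        self.embed = nn.Linear(OBS_DIM + ACT_DIM, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, ACT_DIM)

    def forward(self, obs_hist, act_hist):
        tokens = self.embed(torch.cat([obs_hist, act_hist], dim=-1))
        # Causal mask: each step may only attend to earlier steps.
        mask = torch.triu(torch.full((HISTORY, HISTORY), float("-inf")),
                          diagonal=1)
        encoded = self.encoder(tokens, mask=mask)
        return self.head(encoded[:, -1])  # next action from the last token

controller = HistoryController()
obs_hist = torch.randn(1, HISTORY, OBS_DIM)  # 16 past observations
act_hist = torch.randn(1, HISTORY, ACT_DIM)  # 16 past actions
next_action = controller(obs_hist, act_hist)
print(next_action.shape)  # torch.Size([1, 10])
```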

After extensive training in the digital realm, the controller ran a real robot through a full week of rigorous outdoor tests, during which the robot kept its balance without falling over once. In the lab, the robot withstood external forces, including an inflatable exercise ball thrown at it. The controller also outperformed the manufacturer’s non-machine-learning controller, cleanly navigating a series of planks on the ground. And while the default controller struggled to climb a step, the RL controller managed it, despite never having seen steps during training.

Reinforcement learning has gained significant popularity in recent years, particularly for four-legged locomotion, and these studies show that the same techniques carry over to two-legged robots. According to Pulkit Agrawal, a computer scientist at MIT, these papers mark a tipping point by matching or surpassing manually designed controllers, and with enough data, a wide range of capabilities can be unlocked in a remarkably short time.

The two papers’ approaches are likely complementary: future AI robots will need both the resilience of Berkeley’s system and the agility of Google DeepMind’s. Real-world soccer incorporates both, and, as Lever notes, soccer has long posed a significant challenge for robotics and artificial intelligence.

 
