iOS 9 release date set for summer, with public betas via the AppleSeed program





Good news, Apple fans! iOS 9 already seems to be in the works and might hit your iOS device earlier than anyone expected. After the controversial, bug-ridden release of iOS 8, it seems natural that Tim Cook and co. are working hard on new software releases that would make the iOS experience friendlier to users. iOS 9 will be the final installment in a series of updates meant to fix the problems still present in the latest release, iOS 8.2. According to a 9to5Mac report, Apple plans to ship quite a few updates before iOS 9 is ready to launch for the iPhone and iPad.

Supposedly, the iOS 9 release date is set for the middle of the upcoming summer, most likely alongside a companion device such as the iPhone 6S or even the iPhone 7. Rumors say the Cupertino-based company will launch two flagship phones this year, the iPhone 6S first, followed by the iPhone 7, but that's unconfirmed, so take it with a grain of salt. Still, the news that iOS 9 is coming this summer further fuels Apple enthusiasts' conviction that the iPhone 6S will launch before Halloween. If the new software release is set for, say, June 2015, it's very likely that Apple will release a phone alongside iOS 9 to give the launch some pizzazz and hype.

Before getting to iOS 9, though, Apple has a few plans on the software side. 9to5Mac expects iOS 8.3 to launch sometime in March, with the second beta going out to developers next week. What's more, iOS 8.3 won't be limited to developers: Apple plans to open its AppleSeed program in March, which suggests iOS 8.3 will be released as the company's first-ever public beta (in mobile software, that is). Exciting news! We have also learned that iOS 8.3 carries the code name Stowe, and its official launch will be followed by another public beta, iOS 8.4, code-named Copper.

The final release in this cycle will be Monarch, or iOS 9, and it will debut at the Apple Worldwide Developers Conference in June, after which it will be sent out as a public beta, a good move following last year's bumpy software release. Apple's AppleSeed program is a very good idea, seeing as users are still complaining about iOS 8 and the bugs it brings to devices, especially the iPhone 5S. With the program, future iOS releases, including iOS 9, will be subject to public scrutiny, and users will be able to report every bug they find so that Apple can make the final release as polished as possible. That official release will most likely fall to… well, fall 2015, when the iPhone 7 is supposedly due.

It seems that the public beta program started with OS X Yosemite has paid off for both Apple and its fans, as it laid the groundwork for AppleSeed. The OS X Yosemite beta showed the company that it's a good idea to get user feedback before officially releasing software, as Apple fans can be very strict when it comes to bugs on their devices, hence the uproar over iOS 8. As for what iOS 9 will bring to the table, we're not entirely sure, but sources say the release will focus on optimization, stability, and a few new features. Whatever iOS 9 turns out to be, we're very curious to see how the AppleSeed program unfolds.

As part of the editorial team here at Geekreply, John spends a lot of his time making sure each article is up to snuff. That said, he also occasionally pens articles on the latest in Geek culture. From Gaming to Science, expect the latest news fast from John and team.


Airchat, developed by Naval Ravikant, is a social application built around spoken conversation rather than written messages





Airchat is a recently launched social media application that encourages users to engage in open, spontaneous spoken conversations.

A previous iteration of Airchat was released last year. Yesterday, however, the team, which includes Naval Ravikant, the founder of AngelList, and Brian Norgard, a former product executive at Tinder, rebuilt the application and reintroduced it on both iOS and Android. At present, Airchat is accessible by invitation only, yet it has already reached #27 in the social networking category on Apple's App Store.

Airchat's user interface is visually familiar and easy to understand. Users can follow other users, scroll through a feed of posts, and reply to, like, and share those posts. The difference is that posts and replies are audio recordings, which the application then transcribes into text.

Airchat starts playing messages automatically, and you can move through them quickly by swiping up and down. If you prefer, you can pause the audio and just read the text. Users can also share photos and videos, but audio is clearly the main attraction, and Ravikant argues it has the potential to significantly change how social apps work, especially compared with text-based platforms.


When I recently joined Airchat, most of the messages I encountered were about the application itself. Notably, Ravikant and Norgard were actively responding to questions and soliciting feedback from users.

“All humans are inherently capable of harmonious interactions with one another; it simply necessitates the use of our innate communication abilities,” Ravikant stated. “The prevalence of online text-only media has created the false belief that people are unable to get along, when in reality, everyone is capable of getting along.”

Digital entrepreneurs have bet on voice as the next big thing in social media before. However, Airchat's asynchronous, threaded messages offer a distinctly different experience from the ephemeral live chat rooms that briefly took off on Clubhouse and Twitter Spaces. Norgard claimed this approach removes the stage fright that keeps people from participating, since users can make as many attempts at recording a message as they like without anyone knowing.

Indeed, he said that in talking with early users, the team found that most people currently on Airchat describe themselves as introverted and shy.

Personally, I have not yet worked up the nerve to post anything. I was mostly interested in watching how other people were using the application, and I have mixed feelings about hearing the sound of my own voice.

Still, there is value in listening to Ravikant and Norgard make their case rather than relying solely on the written transcriptions, which can miss subtleties such as enthusiasm and tone. I'm particularly curious to see how deadpan humor and shitposting come across, or don't, in audio.

I also struggled a bit with the playback speed. The application defaults to playing audio at twice normal speed, which felt artificial, especially given that the whole point is to promote human connection. To reset the speed, press and hold the pause button. At 1x, though, I noticed I would start skimming the text of longer posts while listening and often jump ahead before the audio finished. Perhaps that's fine.


However, Ravikant's conviction that voice reduces hostility does not obviate the need for content-moderation features. According to him, the feed operates on intricate rules designed to hide spam, trolls, and people you would rather not hear from (or who would rather not hear from you). At the time of publication, though, he had not replied to a follow-up user question about content moderation.

When asked about monetization, namely the introduction of advertisements, audio or otherwise, Ravikant said the company is currently under no pressure to generate revenue. (He characterized himself as "not the exclusive investor" but rather a significant stakeholder in the company.)

“Monetization is of little importance to me,” he stated. “We will operate this project with minimal financial resources if necessary.”



Google goes all in on generative AI at the Google Cloud Next event





Some 30,000 people gathered in Las Vegas this week to hear the latest from Google Cloud, and what they heard was generative AI, nonstop. Google Cloud is primarily a vendor of cloud infrastructure and platform services, but you could easily have missed that amid the flood of AI news.

Google's showcase was impressive, but, much like Salesforce's event in New York City last year, the company barely mentioned its core business except when discussing generative AI.

Google unveiled a range of AI improvements aimed at helping users leverage the Gemini large language model (LLM) and boost productivity across the platform. That is undoubtedly a worthy goal, and throughout the Day 1 keynote and the subsequent Developer Keynote, Google packed in plenty of demonstrations to illustrate what these solutions can do.

However, several of the demos seemed overly simplistic, even allowing for the time constraints of a keynote. They also relied primarily on examples within the Google ecosystem, even though most companies keep a significant portion of their data in repositories outside Google.

Some of the examples seemed feasible without AI at all. In one e-commerce demonstration, the presenter called the vendor to complete an online transaction. It was designed to show off the conversational capabilities of a sales bot, but in reality the customer could easily have completed the task on the website.

Generative AI does have powerful applications, such as generating code, analyzing a body of content and answering questions about it, and sifting log data to determine the cause of a website outage. The company has also built task- and role-based agents that apply generative AI to help individual developers, creatives, employees, and others.

But when it came to building AI tools on Google's models, as opposed to using the ones Google and other vendors have created for their customers, I couldn't help noticing that the company was downplaying many of the challenges that could stand in the way of a successful generative AI implementation. However easy the vendors make it sound, the truth is that implementing any advanced technology at a large business is a significant challenge.

Significant transformation is not simple
As with earlier technological waves of the past 15 years, such as mobile, cloud, containerization, and marketing automation, we have heard plenty of claims about potential benefits. But each of those shifts brought its own complexity, and large corporations move more cautiously than we might imagine. AI looks like it will demand significantly more effort and resources than Google or the other major vendors are openly acknowledging.

Past technology revolutions have shown that hype often ends in widespread disappointment. Years on, we still see prominent corporations that, despite having every opportunity, are only experimenting with these sophisticated technologies, or avoiding them entirely, long after their introduction.

Companies can fail to capitalize on technological innovation for many reasons: organizational inertia, a rigid technology infrastructure that resists newer solutions, or people inside the company who block even well-intentioned initiatives. Those people may sit in legal, HR, IT, or other departments and may reject substantive change for reasons such as internal politics.

Vineet Jain, CEO of Egnyte, a company specializing in storage, governance, and security, sees two categories of companies: those that have already made a substantial move to the cloud, for whom adopting generative AI will be comparatively easy, and those that have been slow to embrace new technologies and are likely to struggle.

He talks with many firms that still rely predominantly on on-premises technology and have a long way to go before they can even consider the potential benefits of AI. "We engage with numerous ‘late’ cloud adopters who have either not initiated or are in the initial stages of their pursuit of digital transformation," Jain said.

AI may force these organizations to take digital transformation seriously, he said, though they could struggle because of how far behind they are. "Before incorporating AI, these companies must first address and resolve the existing issues and establish a robust data security and governance framework," he stated.

It always comes down to the data
Major vendors like Google present these solutions as straightforward to implement, but surface simplicity does not mean there is no complexity behind the scenes. All week I kept hearing the same point: the quality of the data feeding Gemini and other large language models is crucial. If the input data is poor, the generative AI output will be poor too.

That process begins with collecting and analyzing data. If your data is disorganized, it will be hard to prepare it for your specific LLM use case. Kashif Rahamatullah, the Deloitte principal who oversees the firm's Google Cloud practice, was impressed by Google's announcements but noted that companies without clean, organized data may still struggle to deploy generative AI solutions. "The initial AI conversation often transitions into a focus on data cleaning and consolidation, as this is crucial for maximizing the benefits of generative AI," Rahamatullah explained.

Google has built generative AI tools to help data engineers construct data pipelines connecting to data sources inside and outside the Google ecosystem. "The purpose is to enhance the efficiency of data engineering teams by automating the labor-intensive tasks associated with data movement and preparation for these models," explained Gerrit Kazmaier, Google's Vice President and General Manager for Database, Data Analytics, and Looker, in an interview.

That will help with data integration and cleansing, particularly at firms that are well along in their digital transformations. But for companies like those Jain described, which have made little progress on digital transformation, Google's tools could pose further challenges.

Beyond implementation, AI brings additional hurdles, says Andy Thurai, an analyst at Constellation Research, whether you are building an application on an existing model or trying to create a custom one. "During the implementation of either solution, companies must consider governance, liability, security, privacy, ethical and responsible use, and compliance with these implementations," stated Thurai. And all of that is a significant undertaking.

Executives, IT professionals, developers, and the other GCN attendees this week may have come looking for insight into Google Cloud's future offerings. But if they weren't shopping for AI, or if their organization wasn't prepared for it, they may have left Sin City overwhelmed by Google's single-minded focus on AI. It could be a long time before organizations that lack digital sophistication can take full advantage of these technologies, beyond the more packaged solutions offered by Google and other vendors.



Generative AI is coming to healthcare, and not everyone is enthusiastic about it





Generative artificial intelligence (AI), which can create and analyze images, text, audio, video, and other forms of data, is becoming more prevalent in healthcare, driven by both large technology companies and startups.

Google Cloud is partnering with Highmark Health, a nonprofit healthcare company based in Pittsburgh, to develop generative AI tools that aim to customize the patient intake process. Amazon’s AWS division is collaborating with undisclosed clients to explore the application of generative AI in analyzing medical records for “social determinants of health.” Microsoft Azure is assisting Providence, a non-profit healthcare network, in constructing a generative AI system that can automatically prioritize and assign messages from patients to care professionals.

Notable generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI application for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which builds analytics tools for medical documentation.

The broad interest in generative AI is reflected in the investments flowing into healthcare-focused generative AI initiatives. Healthcare businesses using generative AI have collectively raised tens of millions of dollars in venture capital so far, and the overwhelming majority of health investors say generative AI has substantially influenced their investment strategies.

However, there is a divergence of opinions among both experts and patients on the readiness of healthcare-focused generative AI for widespread deployment.

People may not want generative AI
In a recent Deloitte survey, only 53% of U.S. consumers said they believed generative AI could improve healthcare, for example by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer of the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs' largest health system, believes the pessimism is justified. He cautioned that deploying generative AI may be premature, given its substantial limitations and the concerns over its effectiveness.

A major problem with generative AI, he said, is its inability to handle intricate medical inquiries or urgent situations. "Due to its limited knowledge base, which lacks current clinical information, and its lack of human expertise, it is not suitable for offering comprehensive medical advice or treatment recommendations."

Multiple studies suggest there is validity to those arguments.

A study in the journal JAMA Pediatrics found that ChatGPT, OpenAI's AI chatbot, which some healthcare organizations have piloted for limited use cases, had an error rate of 83% when diagnosing pediatric disorders. And in evaluating OpenAI's GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis first in nearly two-thirds of cases.

Today's generative AI systems also struggle with the medical administrative tasks that are an integral part of clinicians' daily workflows. On MedAlign, a benchmark that assesses how well generative AI can do things like summarize patient health records and search across notes, GPT-4 failed 35% of the time.

OpenAI and other generative AI vendors warn against relying on their models for medical guidance. But Borkowski and others argue those warnings do not go far enough. "Relying exclusively on generative AI for healthcare may result in incorrect diagnoses, unsuitable treatments, or potentially life-threatening circumstances," said Borkowski.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen's Institute for AI in Medicine, which studies applications of emerging technology in patient care, shares Borkowski's worries. He asserts that the only safe way to use generative AI in healthcare today is under the close supervision of a physician.

"The results can be highly inaccurate, and it is becoming increasingly challenging to remain cognizant of this," Egger stated. Generative AI can certainly be used, he said, for example to pre-write discharge letters. But it remains the physicians' duty to verify the output and make the final call.

Generative AI can reinforce and perpetuate stereotypes
Generative AI in healthcare can be particularly harmful when it perpetuates stereotypes.

In a 2023 study, researchers at Stanford Medicine tested ChatGPT and other generative AI chatbots on questions about kidney function, lung capacity, and skin thickness. The co-authors found that ChatGPT's answers were not only frequently wrong, but also reinforced long-standing false beliefs about biological differences between Black and white people, falsehoods that have been known to lead medical professionals to misdiagnose patients.

The irony is that the patients most likely to be discriminated against by generative AI in healthcare are also those most likely to use it.

The Deloitte survey found that people without healthcare coverage, disproportionately people of color, according to a KFF study, are more likely to try generative AI for tasks such as finding a doctor or getting mental health support. If the AI's suggestions are tainted by bias, that could worsen existing disparities in care.

Nevertheless, certain experts contend that generative AI is making progress in addressing this issue.

In a study published in late 2023, Microsoft researchers reported achieving 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 could not reach that score; the researchers say that through prompt engineering, crafting specific prompts that steer GPT-4 toward the desired output, they boosted the model's performance by as many as 16.2 percentage points. (It is worth noting that Microsoft is a major investor in OpenAI.)
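To give a sense of what prompt engineering can involve, here is a minimal sketch of one common approach: packing the prompt with worked examples and a step-by-step reasoning instruction before the real question. This is an illustration only, not the technique from the Microsoft study; the function name, example questions, and prompt wording are all hypothetical.

```python
# Illustrative sketch of few-shot, chain-of-thought prompt construction.
# The examples and wording are hypothetical, not Microsoft's actual prompts.

def build_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Compose a few-shot prompt that asks the model to reason step by step."""
    parts = ["Answer the medical question. Think step by step, then give a final answer.\n"]
    for ex_question, ex_answer in examples:
        # Each worked example shows the reasoning style we want the model to imitate.
        parts.append(f"Question: {ex_question}\nAnswer: {ex_answer}\n")
    # The real question goes last, with the answer left open for the model.
    parts.append(f"Question: {question}\nAnswer:")
    return "\n".join(parts)

examples = [
    ("Which vitamin deficiency causes scurvy?",
     "Scurvy results from impaired collagen synthesis, which requires vitamin C. "
     "Final answer: vitamin C."),
]
prompt = build_prompt("Which electrolyte disturbance causes peaked T waves on ECG?", examples)
print(prompt)
```

Techniques along these lines, combined with careful selection of which worked examples to include for each question, are one way researchers squeeze extra accuracy out of a model without retraining it.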

Expanding beyond the capabilities of chatbots
Asking a chatbot questions is not the only area where generative AI excels, though. Several researchers argue that medical imaging could benefit significantly from the technology.

In July, a group of scientists published a paper in Nature on a method called complementarity-driven deferral to clinical workflow (CoDoC), which is designed to determine when medical imaging specialists should rely on AI for a diagnosis rather than conventional techniques. According to the co-authors, CoDoC outperformed specialists while reducing clinical workflows by 66%.
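To make the deferral idea concrete, here is a minimal sketch under assumptions: CoDoC itself learns from data when to defer, whereas this illustration uses a fixed confidence threshold, and the function name and numbers are hypothetical.

```python
# Illustrative sketch of a deferral policy: accept the AI's reading when its
# confidence is high, otherwise defer the case to a human specialist.
# This is a simplified stand-in; CoDoC learns its deferral rule rather than
# using a hand-picked threshold like this one.

def route_case(ai_confidence: float, threshold: float = 0.95) -> str:
    """Return who should make the call for one imaging case."""
    return "ai" if ai_confidence >= threshold else "specialist"

# Hypothetical per-case confidence scores from an imaging model.
cases = [0.99, 0.80, 0.97, 0.60, 0.96]
decisions = [route_case(c) for c in cases]
deferred = decisions.count("specialist")
print(decisions)
print(f"{deferred}/{len(cases)} cases deferred to a specialist")
```

The appeal of this pattern is that the AI only rules on the cases it is most sure about, while ambiguous scans still reach a human reader, which is roughly the trade-off the CoDoC authors describe.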

In November, a Chinese research team demonstrated Panda, an AI model designed to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are frequently caught too late for surgical intervention.

Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there is nothing about generative AI that uniquely precludes its deployment in healthcare settings.

“In the short- and mid-term, there are practical uses for generative AI technology such as text correction, automatic documentation of notes and letters, and enhanced search capabilities to optimize electronic patient records,” he stated. “If generative AI technology is effective, there is no justification for not immediately implementing it in these types of positions.”

Rigorous science
However, although generative AI demonstrates potential in some limited domains of medicine, experts such as Borkowski highlight the technological and compliance obstacles that need to be addressed before generative AI can be effectively utilized and relied upon as a comprehensive supportive healthcare tool.

"The utilization of generative AI in healthcare raises substantial concerns regarding privacy and security," stated Borkowski. The sensitivity of medical data and the potential for its misuse or unauthorized access pose serious risks to patient confidentiality and to trust in the healthcare system, he said. Moreover, the regulatory and legal landscape for generative AI in healthcare is still evolving, with questions about liability, data protection, and the practice of medicine by non-human entities yet to be resolved.

Thirunavukarasu, who is quite optimistic about the use of generative AI in healthcare, emphasizes the importance of having a strong scientific foundation for patient-facing solutions.

“Because there will not be any direct clinician oversight, there needs to be pragmatic randomized control trials that show the clinical benefit in order to justify the use of patient-facing generative AI,” he said. “It is crucial to have effective governance in place in order to address any unforeseen negative consequences that may arise after widespread implementation.”

The World Health Organization recently released guidelines calling for rigorous science and human oversight in generative AI for healthcare, along with audits, transparency, and impact assessments of this AI by independent third parties. The WHO guidelines also call for involving a diverse group of people in the development of generative AI for healthcare, with opportunities to raise concerns and provide input at every stage of the process.

“Unless the concerns are adequately resolved and appropriate precautions are implemented,” Borkowski stated, “the extensive adoption of medically generative AI could potentially pose harm to patients and the healthcare industry as a whole.”
