Curated AI Content from Around the Web – Page 3 | The AI Track

Explore another page featuring top AI insights from around the web. This compilation includes some of the most informative and thought-provoking articles on AI, providing valuable perspectives on the industry’s future.


On this page, you will explore articles covering topics such as how to detect hallucinations in large language models, the role of AI in enhancing quantum computing, startups advancing healthcare with AI, how AI is used for financial statement analysis, how AI is used as a lie detector, and how AI is transforming the music business. Discover insights from leading sources like MIT Technology Review, Forbes, and more.

Is AI conscious? Most people say yes

University of Waterloo

2/7/2024

An AI tool has been able to identify when fingerprints from different fingers belong to the same person. A team at Columbia University in the US trained an AI tool on 60,000 fingerprints to see if it could work out which ones belonged to the same individual. The researchers claim the technology can identify, with 75-90% accuracy, whether prints from different fingers came from one person.

Forensics

The researchers think the AI tool was analysing the fingerprints in a different way to traditional methods, focusing on the orientation of the ridges in the centre of a finger rather than the way in which the individual ridges end and fork, which is known as minutiae. Prof Lipson said both he and Gabe Guo, an undergraduate student, were surprised by the outcome. “All we can say is that as far as we are aware, no two people have yet to demonstrate the same fingerprints.”

Crime scenes

The Columbia University team, none of whom have forensic backgrounds, admitted that more research was needed. AI tools are typically trained on vast amounts of data, and many more fingerprints would be required to develop this technology further. Additionally, all the fingerprints used to develop the model were complete prints of good quality, whereas in the real world partial or poor prints are more common. “Our tool is not good enough for deciding evidence in court cases, but it is good for generating leads in forensics investigations,” claimed Mr Guo.

But Dr Sarah Fieldhouse, associate professor of forensic science at Staffordshire University, said she did not think the study would have “significant impact” on criminal casework at this stage. She said there were questions around whether the markers the AI tool was focusing on remained the same depending on how the skin twisted as it came into contact with the print surface, and also whether they remained the same over the course of a lifetime, like traditional markers do. But this could be tricky to answer, as the researchers are uncertain about exactly what the AI is doing, as is the case with many AI-driven tools. The Columbia University study has been peer-reviewed and will be published in the journal Science Advances on Friday.

BBC

11/1/2024

The article discusses the significant energy consumption of AI technologies and their environmental impact, highlighted by an IEA forecast. It explores the resource-intensive nature of training and operating AI models, like those used in generative AI, which contribute to increased global energy demand. The discussion also covers the ecological challenges posed by AI-driven data centers and the broader implications for sustainable practices in technology use.

Nature

24/7/2024

Key Takeaway: Model collapse is a degenerative process in AI models trained on recursively generated data, leading to loss of information about the true data distribution and a convergence to a less accurate and highly biased model.

Key Points:

  • Model Collapse Definition: When AI models are trained on data generated by previous models, they progressively lose information about the original data distribution, starting with rare events (early collapse) and eventually misrepresenting the entire distribution (late collapse).
  • Sources of Error: Statistical approximation error, functional expressivity error, and functional approximation error all contribute to model collapse.
  • Empirical Evidence: Experiments with language models like OPT-125m show increased perplexity and degraded performance over generations when trained on recursively generated data.
  • Real-World Implications: Continuous use of LLM-generated data can lead to significant inaccuracies, stressing the need for genuine human-produced data to maintain model reliability and fairness.

Why This Matters: Model collapse poses a significant threat to the accuracy and reliability of AI models, especially as AI-generated content becomes more pervasive. This emphasizes the importance of preserving and incorporating original human-generated data to ensure AI models remain robust and unbiased.
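The dynamic is easy to reproduce in miniature. The sketch below is a toy illustration of the same mechanism (not the paper’s experiment): each generation re-estimates a categorical distribution from samples generated by the previous generation’s estimate. Once a rare category fails to appear in one generation’s sample, it can never be sampled again, mirroring the “early collapse” loss of rare events:

```python
import random
from collections import Counter

random.seed(0)
N_CATS, N_SAMPLES, N_GENS = 20, 50, 30

# Generation 0 "model": a uniform distribution over 20 categories,
# standing in for the true data distribution.
probs = {c: 1 / N_CATS for c in range(N_CATS)}

support_sizes = []
for gen in range(N_GENS):
    cats = list(probs)
    weights = [probs[c] for c in cats]
    # Each generation "trains" only on data sampled from the previous
    # generation's model, re-estimating probabilities from that sample.
    sample = random.choices(cats, weights=weights, k=N_SAMPLES)
    counts = Counter(sample)
    probs = {c: counts[c] / N_SAMPLES for c in counts}
    support_sizes.append(len(probs))

print(support_sizes[0], "->", support_sizes[-1])
```

By construction the support can only shrink: a category estimated at zero probability is absorbed and never returns, which is the categorical analogue of the tails of the true distribution disappearing over generations.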

CNBC

25/7/2024

Visa has leveraged AI and machine learning to combat $40 billion in fraudulent activity over the past year. The company’s AI-driven tools, such as the Visa Account Attack Intelligence (VAAI) Score, help identify and prevent fraud in real time. This technology evaluates transactions quickly, reducing false positives and enhancing the accuracy of fraud detection, ensuring a safer and more secure payment ecosystem.

Rolling Stone

June 27, 2024

Key Takeaway:

Universal Music Group (UMG) is partnering with AI music tech startup SoundLabs to offer its artists AI voice model technology, ensuring ethical use and artist control over their digital likenesses.

Key Points:

  • Partnership Announcement: Universal Music Group (UMG) has partnered with SoundLabs, an AI music tech startup, to provide AI voice cloning technology to its artists.
  • MicDrop Feature: SoundLabs will introduce a feature called MicDrop, allowing artists to create voice models using their own data. These models will remain under the artist’s control and not be accessible to the public.
  • Voice-to-Instrument and Language Transposition: MicDrop will include features such as voice-to-instrument conversion and language transposition, enabling artists to overcome language barriers in their music.
  • Controversies and Legal Actions: AI voice cloning has previously led to controversies, such as the viral song “Heart On My Sleeve” featuring AI-generated vocals of UMG artists Drake and The Weeknd, and Drake’s own use of a Tupac voice clone, which resulted in legal action.
  • Ethical Use of AI: The music industry, including UMG, has emphasized the ethical use of AI, with UMG publishing its Principles for Music Creation With AI. The RIAA has also launched the Human Artistry Campaign advocating for similar ethical considerations.
  • Example of AI Use: Randy Travis’s recent single “Where That Came From” used AI to resurrect his voice with the help of singer James Dupré, demonstrating an ethical application of the technology.
  • SoundLabs and BT: SoundLabs was founded by Grammy-nominated composer BT, who has worked with numerous high-profile artists. Both BT and UMG highlighted the importance of ethical AI use in their joint statement.

Why This Matters:

The partnership between UMG and SoundLabs marks a significant step in the ethical use of AI in the music industry. By ensuring artists have control over their digital likenesses and providing innovative tools like MicDrop, UMG aims to balance technological advancements with respect for artists’ rights. This initiative could set a precedent for the responsible integration of AI in creative fields, fostering innovation while safeguarding artistic integrity.

Nature

June 19, 2024

Key Takeaway:

Semantic entropy-based methods effectively detect hallucinations in large language models, improving the accuracy and reliability of AI-generated outputs.

Key Points:

  • Hallucination Issue: Large language models often generate false outputs, risking reliability in critical fields like law and medicine.
  • Entropy-Based Detection: A method using entropy-based uncertainty estimators detects confabulations in LLMs without task-specific data.
  • Semantic Entropy: Measures uncertainty at the meaning level, improving the detection of unreliable answers compared to traditional methods.
  • Performance Metrics: Demonstrates superior performance with AUROC scores of 0.92 and AURAC scores of 0.85 across multiple datasets.
  • General Applicability: Effective across various datasets, enhancing question-answering accuracy by identifying likely false outputs.

Why This Matters:

Detecting hallucinations in LLMs is crucial for their dependable use in sensitive and high-stakes applications.
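A minimal sketch of the idea: sample several answers to the same question, group them by meaning, and compute entropy over the meaning-clusters rather than over surface strings. The paper clusters answers with a bidirectional-entailment check using an NLI model; the `naive_same` comparison below is only an illustrative stand-in for that check:

```python
import math

def semantic_entropy(answers, same_meaning):
    """Cluster sampled answers by meaning, then compute entropy
    over the clusters rather than over surface strings."""
    clusters = []  # each cluster holds answers judged to share one meaning
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Stand-in for the paper's bidirectional-entailment check (which uses an
# NLI model); here we simply compare normalized text.
naive_same = lambda a, b: a.strip().lower() == b.strip().lower()

confident = ["Paris", "paris", "Paris", "Paris"]        # one meaning
confabulating = ["Paris", "Lyon", "Marseille", "Nice"]  # four meanings
```

Low semantic entropy (all samples share one meaning) signals a reliable answer; high entropy (samples scatter across meanings) flags a likely confabulation.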

arXiv

10 Jun 2024

The Personal Health Large Language Model (PH-LLM) is fine-tuned from Gemini to analyze and interpret numerical time-series personal health data from mobile and wearable devices. It performs tasks such as generating personalized health insights, evaluating domain knowledge, and predicting self-reported sleep outcomes. PH-LLM achieved high accuracy in fitness and sleep medicine tasks, matching or exceeding expert performance. This demonstrates the model’s potential in personal health applications, though further development is needed for safety-critical use.

Earth

July 13, 2024

Key Takeaway: Researchers at CSIRO in Australia are using AI to address quantum errors, a significant challenge in quantum computing, by processing and correcting ‘qubit noise.’

Key Points:

  • AI helps correct quantum errors caused by environmental interference and system imperfections.
  • Qubits can represent multiple states simultaneously, providing immense computational power.
  • AI-based error correction improves the reliability and performance of quantum computers.
  • Future potential includes reducing physical error rates to achieve fault-tolerant quantum computing.

Why This Matters: Combining AI with quantum computing could revolutionize technology, making quantum computers more practical and efficient.

Neuroscience News

June 2024

Key Takeaway: The article explores the integration of neuroscience principles with artificial neural networks (ANNs) to enhance machine learning models’ learning efficiency and adaptability.

Key Points:

  • NeuroAI aims to incorporate biological learning mechanisms into ANNs.
  • The approach emphasizes brain-inspired algorithms to improve learning efficiency.
  • Potential applications include more adaptive and robust AI systems, bridging the gap between biological and artificial intelligence.

Why This Matters: Understanding and mimicking the brain’s learning processes can lead to more efficient, adaptable, and intelligent AI systems, advancing both neuroscience and artificial intelligence research.

Paper

May 2024

Researchers from the University of Chicago have demonstrated that GPT-4, an advanced language model, can outperform human analysts in financial statement analysis. This breakthrough suggests significant potential for AI in financial decision-making, leveraging its ability to recognize patterns and generate insights from structured data.

Key Points:

  • GPT-4 achieved higher prediction accuracy than human analysts in forecasting corporate earnings.
  • The model utilized “chain-of-thought” prompts to emulate human reasoning.
  • Despite numerical analysis challenges, GPT-4 matched specialized ML models.
  • The study highlights AI’s disruptive potential in the financial sector.
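As an illustration of what such a chain-of-thought setup might look like (a hypothetical sketch, not the study’s actual prompt), one can walk the model through the analyst steps the study describes before asking for a directional earnings call:

```python
def build_cot_prompt(financials: dict) -> str:
    """Assemble a chain-of-thought prompt that steps through trend
    analysis and ratio analysis before asking for a prediction."""
    lines = [f"{k}: {v}" for k, v in financials.items()]
    return "\n".join([
        "You are a financial analyst. Anonymized statements:",
        *lines,
        "Step 1: Identify notable trends across the reported periods.",
        "Step 2: Compute key ratios (margins, liquidity, leverage).",
        "Step 3: Based on steps 1-2, will earnings increase or decrease "
        "next period? Answer 'increase' or 'decrease' with a confidence.",
    ])

prompt = build_cot_prompt({"Revenue FY1": 100, "Revenue FY2": 120,
                           "Net income FY1": 8, "Net income FY2": 11})
```

The intermediate steps force the model to articulate the reasoning a human analyst would perform, which the study credits for much of the performance gain over direct prediction.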

MIT Technology Review

July 5, 2024

The article discusses how AI’s productivity gains can enable the viability of products and business models that were previously unsustainable. Traditional sales-led products priced at $500, which couldn’t support the high costs of sales and support teams, might now be feasible due to AI’s ability to automate sales, provide 24/7 support, and enhance customer retention. This shift represents AI’s second-order effects, where AI doesn’t just save money for incumbents but also creates opportunities for new businesses that leverage AI to achieve a sustainable cost structure.

Reuters

July 12, 2024

Key Takeaway: OpenAI is working on a project code-named “Strawberry” to enhance the reasoning capabilities of its AI models. The project aims to enable AI to autonomously navigate the internet, conduct deep research, and handle complex, long-term tasks, significantly advancing AI’s problem-solving abilities.

Key Points:

  • Project Strawberry: Focuses on improving AI’s reasoning and problem-solving skills, potentially achieving human-like or super-human intelligence.
  • Post-Training Process: Strawberry involves fine-tuning pre-trained models to enhance performance, similar to Stanford’s “Self-Taught Reasoner” (STaR) method.
  • Advanced Capabilities: AI models developed under Strawberry aim to conduct autonomous web research and perform tasks typically done by engineers.
  • Long-Horizon Tasks: The project focuses on enabling AI to plan and execute a series of actions over extended periods.
  • Internal and External Signals: OpenAI has hinted to developers and external parties about the impending release of significantly advanced reasoning technologies.

Why This Matters: Improving AI’s reasoning capabilities can revolutionize various fields, from scientific research to software development, making AI more autonomous and effective in handling complex, multi-step tasks.

ScienceDaily

July 12, 2024

Key Takeaway: Researchers at the University of East Anglia have developed an AI model that significantly speeds up heart MRI scans, improving diagnosis efficiency and patient care.

Key Points:

  • The AI model analyzes heart MRI scans in seconds, compared to 45 minutes manually.
  • It accurately determines the size and function of the heart’s chambers.
  • Tested with data from 915 patients across multiple hospitals, ensuring robust validation.
  • The model improves diagnosis accuracy and consistency, potentially enhancing treatment decisions.

Why This Matters: This AI advancement can streamline cardiac diagnostics, reduce healthcare costs, and improve patient outcomes.

NC State University

June 12, 2024

Key Takeaway: AI and computer simulations have been used to train robotic exoskeletons to autonomously assist users in saving energy during walking, running, and stair climbing without extensive human testing.

Key Points:

  • A new machine-learning framework autonomously controls wearable robots to improve human mobility and health.
  • This method eliminates the need for extensive human testing, allowing immediate use of exoskeletons.
  • Human trials showed a 24.3% reduction in energy for walking, 13.1% for running, and 15.4% for climbing stairs.
  • The approach also has potential applications for mobility-impaired individuals and could accelerate the development of assistive robots.

Why This Matters: This innovation can significantly enhance mobility assistance, reducing energy expenditure for both able-bodied and mobility-impaired individuals, potentially leading to broader adoption of robotic exoskeletons.

MIT Technology Review

January 25, 2019

Key Takeaway: Analyzing 16,625 AI research papers over 25 years reveals shifting trends in AI techniques, indicating that the dominance of deep learning may soon give way to new methods.

Key Points:

  • Historical Trends: Shift from knowledge-based systems in the 80s to machine learning in the 90s.
  • Neural Networks: Gained prominence in the 2010s, especially post-2012 due to breakthroughs like Hinton’s ImageNet success.
  • Reinforcement Learning: Rising in recent years, influenced by successes like AlphaGo in 2015.

Why This Matters: Understanding these trends helps anticipate future AI research directions and potential paradigm shifts in the field.

Fortune

June 7, 2024

Key Takeaway: EvolutionaryScale, founded by former Meta AI researchers, has secured $142 million in seed funding to develop an AI model, ESM3, that generates novel proteins, aiming to revolutionize drug development and biotechnology.

Key Points:

  • Background: Founded by Alexander Rives and former Meta colleagues after Meta disbanded their AI protein team in 2023.
  • Funding: Raised $142 million from investors including Nat Friedman and NVentures, Nvidia’s venture arm.
  • Technology: ESM3 model trained on nearly 4 billion proteins, can predict protein structures and functions, creating new proteins not found in nature.
  • Applications: Potential to engineer proteins for tasks such as treating cancer, combating drug-resistant bacteria, and protecting the environment.
  • Innovations: Demonstrated creation of a new fluorescent protein, showcasing the model’s capabilities.
  • Industry Context: Joins other AI-powered biotech initiatives like Google DeepMind’s AlphaFold and Eli Lilly’s partnership with OpenAI.

Why This Matters: EvolutionaryScale’s advancements in protein generation using AI could lead to significant breakthroughs in medical treatments and biotechnology, potentially speeding up the development of new drugs and therapies.

Nature

June 25, 2024

Key Takeaway:

To preserve and revitalize minority languages effectively, machine translation tools must be trained to recognize and incorporate cultural differences, going beyond mere linguistic accuracy.

Key Points:

  • Project Scope: The ‘No Language Left Behind’ project has expanded machine translation to include 200 of the world’s 7,000 languages.
  • Cultural Context: Effective translation requires understanding cultural nuances, which are often overlooked in current large-language-model (LLM) training.
  • Broader Training: There’s a need to broaden the training scope of LLMs to include cultural contexts, not just linguistic data.
  • Language Preservation: Preserving minority languages necessitates tools that respect and understand cultural intricacies.

Why This Matters:

Incorporating cultural differences into machine translation is crucial for accurately preserving and revitalizing minority languages, ensuring they remain vibrant and meaningful in their cultural contexts.

Google

May 29, 2024

Google’s Growth Academy: AI for Health program supports 24 startups from Europe, the Middle East, and Africa in advancing healthcare with AI, enhancing patient care, and accelerating medical research.

Key Points:

  • Startups like Callyope and FiveLives use AI for mental health and cognitive health assessment.
  • Innovations include AI-driven diagnostics, digital therapeutics, and remote patient monitoring.
  • Participants receive mentorship and technical support from Google experts over a three-month program.
  • The program fosters responsible AI innovation and connects founders with industry leaders.

MIT Technology Review

June 7, 2024

Key Takeaway: AI-powered “black boxes” in operating rooms aim to enhance surgical safety by recording and analyzing all activities to identify and mitigate errors.

Key Points:

  • Grantcharov’s Technology: Records everything in the OR using cameras, microphones, and monitors, then uses AI to analyze the data.
  • Challenges: Privacy concerns and potential legal issues have led to resistance and sabotage by some surgeons.
  • Market Impact: New AI systems are being developed to improve OR efficiency and safety, highlighting a significant shift towards data-driven surgical practices.

Why This Matters: Implementing AI in surgeries can significantly reduce errors, improving patient outcomes and overall surgical safety.

WIRED

17/6/2024

Amazon-Powered AI Cameras Used to Detect Emotions of Unwitting UK Train Passengers

Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow. The scope of the AI trials, elements of which have previously been reported, was revealed in a cache of documents obtained in response to a freedom of information request by civil liberties group Big Brother Watch. “The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step,” says Jake Hurfurt, the head of research and investigations at the group. The AI trials used a combination of “smart” CCTV cameras that can detect objects or movements from images they capture and older cameras that have their video feeds connected to cloud-based analysis.

Between five and seven cameras or sensors were included at each station, note the documents, which are dated from April 2023. One station, London Euston, was due to trial a “suicide risk” detection system, but the documents say the camera failed and staff did not see the need to replace it because the station is a “terminus” station. Hurfurt says the most “concerning” element of the trials focused on “passenger demographics.” According to the documents, this setup could use images from the cameras to produce a “statistical analysis of age range and male/female demographics,” and is also able to “analyze for emotion” such as “happy, sad, and angry.”

AI researchers have frequently warned that using the technology to detect emotions is “unreliable,” and some say the technology should be banned due to the difficulty of working out how someone may be feeling from audio or video. In October 2022, the UK’s data regulator, the Information Commissioner’s Office, issued a public statement warning against the use of emotion analysis, saying the technologies are “immature” and “they may not work yet, or indeed ever.”


The most important AI News, Trends and Predictions curated by The AI Track team, offering you a comprehensive view of the artificial intelligence landscape.

Tracking what’s next in AI and how it will impact us
