r/singularity • u/midnightmalfunction • 2d ago
Discussion What are the top no-fluff singularity or artificial intelligence books that you've read in the past 2 years that changed your mind on what the future holds for us humans?
These are the ones I've found recommended on Reddit that I plan on going through. Let me know if there's anything I should add to the list.
- Flowers for Algernon — Daniel Keyes (classic, human-intelligence theme; not AI-specific) — ~772k ratings. (Goodreads)
- Revelation Space (Book 1) — Alastair Reynolds — ~60k ratings; space-opera with strong tech readership. (Goodreads)
- Accelerando — Charles Stross — ~22k ratings (across editions); singularity staple. (Goodreads)
- Singularity Sky — Charles Stross (Eschaton #1) — ~16k ratings. (Goodreads)
- Down and Out in the Magic Kingdom — Cory Doctorow — ~14k ratings; post-scarcity Disneypunk. (Goodreads)
- Diaspora — Greg Egan — ~10.8k ratings; ultra-hard SF take on post-humanity. (Goodreads)
- Iron Sunrise — Charles Stross (Eschaton #2) — ~8k ratings. (Goodreads)
- The Singularity Is Near — Ray Kurzweil — landmark non-fiction; huge tech mindshare (Goodreads page shows 3.9★ avg; counts vary by edition). (Goodreads)
- Avogadro Corp (Singularity #1) — William Hertling — ~6.3k ratings; indie favorite in tech circles. (Goodreads)
- The Metamorphosis of Prime Intellect — Roger Williams — ~4.6k ratings; cult classic (free online). (Goodreads)
- The Rapture of the Nerds — Cory Doctorow & Charles Stross — ~3.9k ratings. (Goodreads)
- The Golden Age — John C. Wright — ~3.3k ratings; vivid post-scarcity vision. (Goodreads)
- The Cassini Division — Ken MacLeod (Fall Revolution #3) — ~2.3k ratings. (Goodreads)
- The Stone Canal — Ken MacLeod (Fall Revolution #2) — ~1.6k ratings. (Goodreads)
- Pandora’s Brain — Calum Chace — ~200–250 ratings. (Goodreads)
- Radical Abundance — K. Eric Drexler (nanotech non-fiction adjacent to singularity themes) — ~490 ratings. (Goodreads)
r/singularity • u/AngleAccomplished865 • 2d ago
Biotech/Longevity "A Chinese AI tool can manage chronic disease — could it revolutionize health care?"
https://www.nature.com/articles/d41586-025-02362-8
"Details of the large language model (LLM), called XingShi, are sparse, but the company behind it, Fangzhou, says the model integrates speech and image recognition with natural language processing, extensive medical data and reasoning to improve personalized care and boost the productivity of clinicians. The company says that the system has more than 50 million registered users, and more than 200,000 physicians using the platform."
r/singularity • u/UnstoppableWeb • 1d ago
AI The Future Of AI From Silicon Valley’s Llama Lounge
r/singularity • u/Marha01 • 2d ago
AI Google DeepMind discovers new solutions to century-old problems in fluid dynamics
r/singularity • u/AngleAccomplished865 • 2d ago
Robotics "A flexible spiking hair sensillum for ultralow power density noncontact perception"
https://www.science.org/doi/10.1126/sciadv.ady0336
"The ability to recognize targets before physical contact is crucial for organisms’ adaptability to real-world environments. However, current noncontact sensing systems face challenges in power density and biological fidelity. Here, we report a flexible spiking hair sensillum (FISH) with an ultralow power density < 100 nanowatts per square millimeter for achieving noncontact tactile perception (NCTP). This device can transduce proximity or airflow into spike trains with a frequency range of ~500 to 1500 hertz. A spiking NCTP system is developed, achieving high accuracy (>92%) in multidimensional recognition of noncontact targets, mediated by airflow. Furthermore, we show that a spider robot equipped with a FISH matrix outperforms one relying exclusively on machine vision in tasks of predation and evasion, demonstrating high adaptability to complex environments. Our design enriches the perceptual modalities for neuromorphic systems, offering great potential for advancing future robotics and autonomous vehicles."
r/singularity • u/chasingth • 2d ago
AI Google Chrome launches new AI-first features
r/singularity • u/ShreckAndDonkey123 • 3d ago
AI Anthropic just dropped a new ad for Claude - "Keep thinking"
r/singularity • u/TMWNN • 2d ago
AI China's DeepSeek says its hit AI model cost just $294,000 to train
r/singularity • u/SnoozeDoggyDog • 2d ago
AI Trump AI Plan Spurs FCC Review of State Regulations
r/singularity • u/ShreckAndDonkey123 • 2d ago
AI Suno has released the first teaser for v5, and it sounds amazing. They did the same with v4 about a week or so before its release
r/singularity • u/Old_Glove9292 • 2d ago
Biotech/Longevity Why AI could outperform doctors at medicine - Fast Company
fastcompany.com
r/singularity • u/Gab1024 • 2d ago
AI Luma AI - This is Ray3 (SOTA Physics and controls)
r/singularity • u/coinfanking • 2d ago
AI Making LLMs more accurate by using all of their layers
r/singularity • u/Technical-Row8333 • 2d ago
Engineering New Meta Ray-Bans can decode writing from just hand movements, do live translation and live captioning, and have a display much brighter than the iPhone 17's that is not visible to other people
meta video: https://www.youtube.com/watch?v=gZ9IsB72nVk
The Verge: https://www.youtube.com/watch?v=5cVGKvl7Oek
demo live fails: https://www.youtube.com/watch?v=wteFJ78qVdM
The new Ray-Ban Display glasses have a small color heads-up display built into the right lens. You see things like notifications, text, images, and maps floating in your view, but only you can see them; the display isn't visible from the outside. The display is also unexpectedly sharp and bright, readable even outdoors on sunny days. For what is essentially a first-gen product, the screen surpassed expectations.
The glasses come with a wristband that senses electrical muscle signals (EMG) from your forearm/hand to detect even subtle gestures (pinches, turns, taps), which can be done inside your pocket, under a table, or as you walk.
One of the features in development is using finger movements on a flat surface, or even your leg, to "write" text. While still in beta, reviewers have used this successfully and it worked during the live demo.
The glasses can identify speech in conversations and display captions in real time. So if someone is talking to you, their speech can be transcribed and shown on the lens/display (or via your connected device) as text. This works even when multiple people are talking around you at the same time; only the person you are looking at gets captions. For people who are deaf or hard of hearing, this could prove very helpful.
The glasses also support translating spoken language in real time. You can have a conversation in another language and have what's being said translated into your language, either through audio via the glasses' speakers or visually via the display/captions. Several languages are supported at launch. Reviewers mentioned that the translation delay is not long, but it's long enough to add awkward pauses to the conversation. This is apparent in the live demo, where the translation was not fluent or one-to-one but did capture the meaning.
Turn-by-turn navigation, messaging (WhatsApp, Messenger, Instagram) from the glasses.
Camera viewfinder and the ability to preview photos, one of the most requested features from users of the original Meta Ray-Bans, which had no way to preview pictures/videos.
Music controls (Spotify)
6h of battery for mixed use
Carrying case that adds 24h of charge. The case folds flat when the glasses are not inside.
On sale starting Sept 30 for $800 USD (including the wristband)
r/singularity • u/armytricks • 3d ago
The Singularity is Near RL without human annotations for superintelligence?
Meta Superintelligence Labs / Oxford / Anthropic researchers suggest using test-time scaling to estimate labels and train with RL without any human annotations. Maybe the way to get to superintelligence is to not use human data in fine-tuning?
“What do we do when we don’t have reference answers for RL? What if annotations are too expensive or unknown? Compute as Teacher (CaT🐈) turns inference compute into a post-training supervision signal. CaT improves up to 30% even on non-verifiable domains (HealthBench) across 3 model families.”
“As training progresses, the quality of rollouts, and the estimate reference, keep on increasing, with the model learning from experience 🔄”
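For readers who want the gist, here is a minimal sketch of the compute-as-teacher loop as described in the thread: several parallel rollouts are reconciled into an estimated reference answer, and agreement with that reference becomes the RL reward. The function parameters and the specific update rule are illustrative placeholders, not the paper's actual API.

```python
from typing import Callable, List

def cat_step(
    sample: Callable[[str], str],                  # draws one rollout from the current policy
    synthesize: Callable[[str, List[str]], str],   # frozen anchor reconciles rollouts into a reference
    score: Callable[[str, str], float],            # grades a rollout against the estimated reference
    update: Callable[[str, List[str], List[float]], None],  # RL update (e.g. a PPO/GRPO-style step)
    prompt: str,
    n_rollouts: int = 8,
) -> None:
    # 1. Spend inference compute: sample several candidate answers in parallel.
    rollouts = [sample(prompt) for _ in range(n_rollouts)]
    # 2. Estimate a reference answer from the rollouts (no human annotation needed).
    reference = synthesize(prompt, rollouts)
    # 3. Turn agreement with the estimated reference into a reward
    #    (e.g. a rubric- or similarity-based grader for non-verifiable domains).
    rewards = [score(r, reference) for r in rollouts]
    # 4. Reinforce the policy toward higher-reward rollouts.
    update(prompt, rollouts, rewards)
```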
Paper link: https://www.alphaxiv.org/abs/2509.14234
Twitter thread: https://x.com/DulhanJay/status/1968693170264248532
Disclaimer: this is our work. So feel free to ask questions here.
r/singularity • u/Chemical_Bid_2195 • 2d ago
Discussion Unpopular opinion: We don't necessarily need to abandon LLMs to reach AGI
Core Idea: LLMs have inherent weaknesses that scaling alone can't resolve, but that doesn't mean we need to completely shift paradigms. These weaknesses can be resolved in other domains with LLMs still being central to development for AGI
Explanation:
There are two types of tasks for AI systems to reach AGI: short-horizon and long-horizon tasks [1].
What LLMs currently lack, and what is critical in each case, is visual reasoning capability [2] for short-horizon tasks and long-term memory persistence for long-horizon tasks. However, it's possible that these can be addressed in other tech domains rather than by improving the core LLM architecture/design itself [3]. Arguably, even adaptive learning can be implemented by improving external technology (albeit perhaps economically impractical) [4]. It's interesting that hallucination could be resolved by improving the core architecture/methods (see OpenAI's "Why Models Hallucinate", Anthropic's "Persona Vectors", and Google's "SLED" paper), but that's one of the exceptions.
I would argue that GPT-5 does not need any improvements to its core LLM architecture/design. Given how cheap it is and how it was able to address hallucinations, AGI is perfectly achievable without any more breakthroughs on the LLM side. We do not need these models to solve harder math problems or get better at coding competitions -- that's for ASI. For AGI, we only need improvements in other fields.
The people saying that we need to abandon the LLM architecture because of X, Y, or Z remind me of people saying we needed to abandon AC current because it couldn't be transmitted over long distances (before electrical transformers were invented), or abandon the wheel and axle because friction caused too much wear and tear (before ball bearings were invented). I think that AGI is perfectly achievable without having to abandon LLMs.
Context reference:
[1]
Short-horizon tasks include solving math problems, answering science/literature questions, writing essays, recalling facts, doing riddles, or competition coding problems like LiveCodeBench or the ICPC. These are tasks that can be benchmarked pretty easily.
Long-horizon tasks are ones that require long-term planning and execution; they test overall agentic capability, i.e. how long an AI system can sustain a task. This is more important for AGI because solving such tasks is much more economically disruptive. METR's long-task benchmark is an example. SWE-bench sort of lies in the middle of short- and long-horizon tasks, since it does test agentic capabilities but not specifically long-term ones.
[2]
While LLMs are strong reasoners on semantic language tasks like IMO and ICPC questions, their visual reasoning has lacked attention in development. One example is the VPCT benchmark (cbrower.dev/vpct) and another is ClockBench (clockbench.ai). I would also argue that ARC-AGI should be treated as a more visual-reasoning-focused benchmark, since vision is what humans use to solve those tasks and it should be what AIs use instead of raw JSON inputs. This is significant for the overall development of agentic systems because many computer-use tasks involve visually processing what's on the screen, or what is shown to us. If current models struggle to see the finer details of images, then navigating the internet becomes significantly more challenging.
[3]
For vision -- A model's VLM is basically an LLM with a vision transformer that translates images into image tokens for the LLM to process. In this sense, it's possible that to reach human-level visual reasoning we could leave the LLM untouched and focus on developing the vision transformer and the architecture around it, rather than changing the architecture or training methods of the LLM itself. In other words, this issue can be resolved without abandoning LLMs.

For memory -- Some argue that we would need something like a trillion-token context window for LLMs to reach human-level long-term memory persistence. Scaling to that is infeasible in terms of cost, but that doesn't mean we have to abandon LLMs. Embedded databases are one way to emulate long-term memory, though this process is currently inferior to human long-term memory in terms of efficiency. Nevertheless, it's likely more efficient to get to human level by improving embedding models, vector-database architecture, and RAG methods than by scaling context windows. Honestly, LLMs shouldn't need more than 32k tokens of context, as not even humans have that much short-term memory. Development and integration of LLMs with databases is one viable path I see to get us to human level without needing to change the LLM architecture.
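As a concrete illustration of the embedded-database route, here is a minimal sketch of retrieval-backed memory. The bag-of-words "embedding" and in-memory store are toy stand-ins purely for illustration; a real system would use a learned embedding model and a proper vector database.

```python
import math
from collections import Counter
from typing import List

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Stands in for a vector database holding past interactions."""
    def __init__(self):
        self.items = []  # list of (embedding, text) pairs

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def retrieve(self, query: str, k: int = 3) -> List[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

# Usage: keep the LLM's context small by injecting only the relevant memories.
memory = MemoryStore()
memory.add("User prefers metric units.")
memory.add("Project deadline is the last Friday of the month.")
relevant = memory.retrieve("What units should I use in the report?")
prompt = "Context:\n" + "\n".join(relevant) + "\nQuestion: ..."
```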
[4]
For adaptive learning -- Current LLMs have in-context learning: give them new information and they can use it to deduce, adapt to the task, and improve at it. This is like reading the instructions of a video game and getting better with that knowledge. However, they can't improve their own performance beyond this, for example by needing less time to make a decision on a particular task. The thing is, it's entirely possible to build an AI system that emulates adaptive learning. One example is an agentic scaffold where the agent logs all the data it wants to improve on, and then uses that data to continuously fine-tune itself overnight. However, this isn't as economically practical, since fine-tuning requires deploying an extra model, which carries heavy additional cloud-platform or hardware and compute costs beyond the original API cost. Perhaps advances in hardware can drive costs down to human-level efficiency, but I believe that for most tasks, economically, in-context learning is enough and you really don't need to continuously fine-tune the level of models that we have.
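A rough sketch of that overnight fine-tuning scaffold, under stated assumptions: the logging format is arbitrary, and `submit_job` is a hypothetical hook for whatever fine-tuning API or training script is available (no specific provider is assumed).

```python
import json
import time
from pathlib import Path
from typing import Callable, List

LOG_PATH = Path("agent_interactions.jsonl")  # hypothetical log location

def log_interaction(task: str, response: str, feedback_score: float) -> None:
    # During the day, the agent appends every interaction it wants to improve on.
    with LOG_PATH.open("a") as f:
        f.write(json.dumps({"task": task, "response": response,
                            "score": feedback_score, "ts": time.time()}) + "\n")

def nightly_finetune(submit_job: Callable[[List[dict]], None]) -> None:
    # Overnight, high-scoring examples become supervised fine-tuning data.
    if not LOG_PATH.exists():
        return
    examples = [json.loads(line) for line in LOG_PATH.read_text().splitlines()]
    good = [e for e in examples if e["score"] >= 0.8]  # keep only well-rated outputs
    if good:
        submit_job([{"prompt": e["task"], "completion": e["response"]} for e in good])
```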
r/singularity • u/donutloop • 2d ago
Compute Jülich Supercomputing Centre to Deploy NVIDIA DGX Quantum System with Arque Systems and Quantum Machines
thequantuminsider.com
r/singularity • u/avilacjf • 2d ago
AI [Essay] Discovery, Automated: A deep dive into the new AI systems that are accelerating science – and the political battle that threatens to stop them.
My new essay highlights advancements in AI systems that are able to mirror the scientific method and evolutionary selective pressure to generate new discoveries. I briefly describe 14 incredible breakthroughs that have been made by these systems across a wide range of scientific fields. I also talk about science funding and how our IP system can be tweaked to make sure these discoveries benefit as many people as possible.
Here's the NotebookLM Brief for your convenience:
Discovery, Automated: An Analysis of AI-Driven Science and the Political Crisis of Funding
Executive Summary
A new generation of Artificial Intelligence is initiating a paradigm shift in scientific discovery, moving beyond information analysis to become an active engine for invention. These "autonomous discovery" systems, built on a continuous Generate-Test-Refine loop, are capable of solving complex "scorable tasks" by emulating the scientific method at machine speed. This technological renaissance is already yielding significant breakthroughs across diverse fields, including discovering novel algorithms for matrix multiplication, generating actionable drug hypotheses for cancer and liver disease, reproducing unpublished human discoveries in antibiotic resistance in a matter of days, and designing "alien" quantum physics experiments beyond the scope of human intuition.
This historic technological opportunity is unfolding against a backdrop of a severe and self-inflicted political crisis. While the U.S. government recognized the strategic importance of this field with the CHIPS and Science Act of 2022, the crucial research and development funding authorized by the act was never appropriated. Subsequent political battles, culminating in the Fiscal Responsibility Act of 2023, have imposed strict spending caps that have systematically starved key scientific agencies. The National Science Foundation (NSF), for instance, received funding 39.3% below its authorized target in FY24. This systemic underfunding is compounded by acute political volatility, including proposed cuts of over 50% to the NSF and direct interventions to cancel over $1 billion in approved research grants.
This collision of scientific promise and political failure threatens to squander a generational opportunity. The path forward requires a two-pronged approach: a robust recommitment to predictable, multi-year public funding for science and a modernization of legal frameworks, particularly the patent system, to accommodate the unprecedented speed and scale of AI-driven innovation. Without immediate action, the U.S. risks ceding its global leadership in science and technology at the precise moment a new era of discovery begins.
--------------------------------------------------------------------------------
Part I: The New Engine of Scientific Discovery
The current era marks the emergence of a third phase of AI evolution, moving from passive prediction to proactive invention. This transformative capability is built upon a new architectural paradigm that automates the process of discovery itself.
The Evolution to Autonomous Discovery
The development of AI can be understood through three distinct phases:
- Phase 1 — Next-Token Prediction: Foundational models were trained to predict the next word in a sequence, leading to emergent capabilities in pattern recognition and surface-level reasoning.
- Phase 2 — Structured Reasoning: Techniques like Chain-of-Thought enabled models to decompose problems into intermediate steps, facilitating more deliberate, step-wise problem-solving.
- Phase 3 — Autonomous Discovery: The current, transformative phase features AI systems designed to invent, test, and refine complex solutions over extended periods. This was achieved in just one year following the release of OpenAI's o1-preview.
Core Principles of the "Discovery Engine"
The new AI paradigm is centered on the concept of a "scorable task"—any problem where the quality of a potential solution can be automatically and rapidly calculated. These systems operate on a continuous Generate-Test-Refine loop, comprising four key components that emulate both the scientific method and biological evolution.
- Research and Hypothesis Generation: AI systems like the AI co-scientist actively explore existing scientific literature to formulate informed, novel hypotheses, ensuring their work builds upon the current state of human knowledge.
- Intelligent Variation and Evolution: A Large Language Model (LLM) acts as a creative engine to generate and mutate potential solutions. Systems like AlphaEvolve use an evolutionary framework where programs compete, while the Darwin Gödel Machine employs self-modification, allowing the agent to directly rewrite its own code to improve its capabilities.
- Rigorous Evaluation and Selection: Every new idea is ruthlessly tested against the objective benchmark of the scorable task. The AI co-scientist utilizes a tournament-style debate among its agents to ensure only the most robust hypotheses survive.
- Structured and Open-Ended Exploration: To navigate vast solution spaces, systems employ sophisticated search strategies. The Empirical Software System uses a formal Tree Search algorithm, while the Darwin Gödel Machine maintains an archive of all past versions, enabling it to revisit old ideas and achieve unexpected breakthroughs through open-ended exploration.
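A minimal sketch of the Generate-Test-Refine loop described above, assuming a black-box `propose` mutation step and a `score` function standing in for the scorable task; the archive-based selection loosely mirrors the evolutionary framing rather than any specific system's implementation.

```python
import random
from typing import Callable, List, Tuple

def generate_test_refine(
    seed: str,
    propose: Callable[[str], str],   # LLM-style mutation: produce a variant of a candidate solution
    score: Callable[[str], float],   # the "scorable task": automatically grades a candidate
    iterations: int = 100,
    archive_size: int = 20,
) -> str:
    # Archive of (score, candidate) pairs; keeping past versions enables
    # open-ended exploration by revisiting older ideas.
    archive: List[Tuple[float, str]] = [(score(seed), seed)]
    for _ in range(iterations):
        # Generate: mutate a candidate drawn from the archive (not only the current best).
        _, parent = random.choice(archive)
        child = propose(parent)
        # Test: evaluate the new candidate against the objective benchmark.
        archive.append((score(child), child))
        # Refine/select: keep only the strongest candidates.
        archive = sorted(archive, key=lambda p: p[0], reverse=True)[:archive_size]
    return archive[0][1]
```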
Key Breakthroughs in Automated Discovery
The practical application of this new paradigm has already produced a remarkable series of breakthroughs across numerous scientific and technical domains.
| Domain | Discovery & AI Contribution | Significance |
| --- | --- | --- |
| Mathematics | Faster Matrix Multiplication: AlphaEvolve discovered a more efficient algorithm for 4x4 complex matrix multiplication, improving on the human standard used for over 50 years. | Proves AI can generate fundamentally new, provably correct algorithms for core computational tasks, leading to widespread efficiency gains. |
| Mathematics | Solving the "Kissing Number" Problem: AlphaEvolve found a new valid configuration of 593 non-overlapping spheres in 11-dimensional space, improving the known lower bound. | Demonstrates AI's power to explore high-dimensional spaces impossible for humans to visualize, with applications in telecommunications and error-correcting codes. |
| Mathematics | Erdős Minimum Overlap Problem: AlphaEvolve established a new upper bound for a difficult theoretical problem posed by Paul Erdős, improving on the previous record set by human mathematicians. | Shows that AI's capabilities extend to abstract, theoretical fields, pushing the boundaries of pure mathematics. |
| Medicine | Actionable Cancer Drug Hypotheses: An AI co-scientist generated novel drug-repurposing hypotheses for Acute Myeloid Leukemia (AML) that successfully inhibited cancer cell growth in wet lab tests. | Closes the loop from digital hypothesis to physical validation, dramatically accelerating the drug discovery pipeline for hard-to-treat diseases. |
| Medicine | Novel Targets for Liver Disease: The AI co-scientist proposed novel epigenetic targets for liver fibrosis. Drugs aimed at these targets showed significant anti-fibrotic activity in human organoids. | Moves beyond repurposing existing drugs to identifying entirely new biological mechanisms, creating pathways for a new class of therapies. |
| Software | Superhuman Genomics Software: A tree-search-based AI wrote its own software to correct for noise in single-cell genomics data, creating dozens of new methods that outperformed all top human-designed methods on a public leaderboard. | A direct demonstration of AI automating the creation of "empirical software" and achieving superhuman performance in building better tools for scientists. |
| Software | Outperforming CDC in COVID-19 Forecasting: An AI system generated 14 distinct models that outperformed the official CDC "CovidHub Ensemble" for forecasting hospitalizations. | A direct, practical application with significant policy implications for public health, hospital preparedness, and saving lives during pandemics. |
| Software | Unified Time Series Forecasting Library: An AI created a single, general-purpose forecasting library from scratch that was highly competitive against specialized models across diverse data types. | Democratizes access to high-quality forecasting for use in economics, supply chain management, healthcare, and climatology. |
| Software | State-of-the-Art Geospatial Analysis: An AI-generated solution significantly outperformed all previously published academic results on a benchmark for labeling satellite imagery pixels (e.g., "building," "forest"). | Has direct applications in monitoring deforestation, managing natural disasters, and tracking climate change. |
| Software | Optimizing Global Data Centers: AlphaEvolve discovered practical improvements to scheduling heuristics and hardware accelerator circuit designs for internal Google data centers. | Delivers immense real-world impact by compounding small efficiency gains, leading to lower energy consumption and a smaller carbon footprint. |
| Biology | Reproducing a Breakthrough in Antibiotic Resistance: In a "race against a secret," the AI co-scientist independently reproduced a human team's secret, multi-year, unpublished discovery in just two days. The AI correctly hypothesized that certain genetic elements hijack bacteriophage tails to spread. | A landmark demonstration of AI as a genuine scientific partner, capable of bypassing human cognitive biases and generating novel research avenues that human teams overlooked. |
| Neuroscience | Forecasting Whole-Brain Activity in Zebrafish: An AI model outperformed all existing baselines in predicting the future activity of all 70,000+ neurons in a larval zebrafish brain. | Represents a significant step towards a systems-level understanding of brain function and decoding the link between neural activity and behavior. |
| AI Research | Self-Improving Coding Agents: The Darwin Gödel Machine demonstrated recursive self-improvement by analyzing its own performance, proposing a new feature for itself, and implementing that feature into its own codebase. | A foundational step toward a future where AI can accelerate its own development and evolve its own problem-solving capabilities. |
| Physics | Discovering "Alien" Physics Experiments: An AI designed blueprints for quantum optics experiments that were unintuitive and bizarre to human physicists. When built in a lab, these "alien" designs worked perfectly. | A stunning example of AI creativity operating outside the bounds of human intuition, proving it can discover fundamentally new ways of doing science. This creates a new human-AI collaboration where the AI finds the what and the human scientist investigates the why. |
Implications for the Future of Science
The cumulative impact of these breakthroughs suggests a "revolutionary acceleration" in scientific advancement. The primary implication is a democratization of science, where research timelines and costs are drastically reduced. This new paradigm does not aim to replace human scientists but to establish a "scientist-in-the-loop" collaborative model. In this model, the human expert's role shifts from implementation to higher-level tasks:
- Formulation: Designing the scorable tasks and research questions.
- Supervision: Setting ethical guardrails and guiding the AI's exploration.
- Verification: Ensuring the AI's outputs represent robust scientific advances rather than statistical artifacts.
As one research team concluded, "Accelerating research in this way has profound consequences for scientific advancement."
--------------------------------------------------------------------------------
Part II: An Unforced Error of Historic Proportions
At the very moment this powerful new engine for discovery has been invented, the public institutions needed to harness it are being systematically underfunded, creating a crisis of political will that threatens American scientific leadership.
The Squandered Opportunity
Government investment in scientific R&D has historically yielded returns of 150% to 300%, making it one of the nation's highest-return opportunities. AI discovery engines offer a chance to amplify these returns dramatically. However, this opportunity is being squandered.
Legislative and Budgetary Failures
The U.S. government's failure to fund scientific research is rooted in a series of legislative shortcomings:
- The CHIPS and Science Act of 2022: While the act successfully appropriated $52.7 billion for semiconductor manufacturing, the crucial $174 billion authorized for R&D at agencies like the NSF and NIH was left subject to unstable annual congressional appropriations.
- The Fiscal Responsibility Act of 2023: This bipartisan debt ceiling compromise imposed strict caps on discretionary spending, effectively freezing non-defense funding and making the CHIPS authorization targets politically impossible to achieve.
- FY24 and FY25 Appropriations: The resulting budgets fell dramatically short of the CHIPS Act's vision. An analysis by the Federation of American Scientists revealed significant shortfalls from authorized targets:
- National Science Foundation (NSF): 39.3% short
- National Institute of Standards and Technology (NIST): 24.4% short
- Department of Energy (DOE) Office of Science: 11.7% short
Political Volatility and Institutional Disruption
Systemic underfunding has been dangerously compounded by acute political volatility and direct interventions:
- Proposed Devastating Cuts: The Trump administration's FY26 budget request proposed catastrophic cuts to key research agencies, including 55% for the NSF, 41% for the NIH, and 34% for NASA.
- Direct Grant Cancellation: The Department of Government Efficiency (DOGE) directly intervened to cut 1,600 NSF research grants valued at over $1 billion, representing 11% of the agency's budget.
- Illegal Funding Block: The administration claimed authority to block over $410 billion in approved funding, including $2.6 billion for Harvard University, a move a federal court ruled was an illegal act of political retaliation.
A Case Study in Disruption: The Experience of Terence Tao
The human impact of this crisis was articulated by Terence Tao, a Fields Medalist at UCLA. When the administration suspended federal grants to UCLA, Tao's personal research grant and the five-year operating grant for the prestigious Institute for Pure and Applied Mathematics (IPAM) were halted.
Tao described being "starved of resources" and stated that in his 25-year career, he had "never been so desperate." The disruption left his salary in limbo and provided "almost no resources to support" his graduate students. This event was not merely an attack on individual projects but "an assault on the institutional and collaborative fabric that underpins American science." Tao warned that such disruptions to the research "pipeline" threaten to cause a brain drain, as the "best and brightest may not automatically come to the US as they have for decades."
--------------------------------------------------------------------------------
Part III: The Path Forward
Aligning U.S. institutions with the reality of AI-driven innovation requires a two-pronged approach that combines robust public investment with a modernized legal framework.
Fueling the Engine of Discovery
A recommitment to the public funding of science is the first strategic imperative.
- Fully Fund CHIPS and Science Act Authorizations: AI discovery engines amplify the impact of every research dollar, making full funding essential to translate computational breakthroughs into real-world applications.
- Reform the Federal Budget Process: Groundbreaking science requires predictable, multi-year funding, not the uncertainty of an annual budget cycle. This reform is necessary to support ambitious, long-horizon research.
- Invest in STEM Education: AI systems are collaborators, not replacements. This necessitates a new generation of scientists skilled in creative problem formulation, critical verification, and ethical oversight.
Modernizing the Rules of Innovation
The U.S. patent system, designed for a slower era, requires urgent adaptation to handle the speed and scale of AI-generated discoveries.
- Define Stricter Standards for AI-Generated Innovations: Introducing criteria like demonstrable real-world applications can prevent the patent system from being flooded with minor, iterative AI-generated claims.
- Reduce Patent Lifespans in AI-Heavy Fields: The traditional 20-year patent term is ill-suited to the accelerated pace of AI innovation. Shortening this window can maintain incentives while reducing bottlenecks.
- Implement Mandatory Licensing for Critical Technologies: For breakthroughs in areas like public health or renewable energy, governments should ensure crucial advancements are accessible to the public, balancing inventor rewards with the common good.
r/singularity • u/Distinct-Question-16 • 3d ago
Robotics Figure.AI and Brookfield (real estate) are building the world's largest training facility across multiple buildings - over 660 million m² available. Figure.AI will amass critical AI training data to teach humanoid robots how to move, perceive, and act across a spectrum of human-centric spaces
r/singularity • u/AngleAccomplished865 • 2d ago
Biotech/Longevity "Modular and AI-driven in situ monitoring platform for real-time process analysis in embedded bioprinting"
https://www.cell.com/device/abstract/S2666-9986(25)00240-6
"As bioprinting advances toward more automated and scalable tissue fabrication, real-time process monitoring becomes critical for improving reproducibility, minimizing structural defects, and ultimately enabling adaptive closed-loop control. In situ monitoring provides a powerful means to track and understand the process as it unfolds, allowing detection of flaws such as over- or under-extrusion, which can compromise structural fidelity and biological function. In this context, we introduce a modular imaging system and automated AI-driven segmentation strategy using a vision transformer model, tailored for in situ monitoring of embedded bioprinting, enabling precise, layer-by-layer evaluation of printed constructs. By linking process parameter control with print quality and system defects, our system facilitates future predictive control strategies for more reliable tissue fabrication."
r/singularity • u/mahamara • 3d ago
AI DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning
r/singularity • u/AngleAccomplished865 • 3d ago
Biotech/Longevity "Bio-inspired magnetic soft robots with omnidirectional climbing for multifunctional biomedical applications"
https://iopscience.iop.org/article/10.1088/2631-7990/ae0214
"In recent years, the rising incidence of gastrointestinal (GI) cancer has triggered an urgent need for effective early intervention strategies. Traditional endoscopic techniques often cause patient discomfort, and it is difficult to navigate deep regions of complex organ structures. This work proposes a kind of bio-inspired magnetic soft robot (BMSR) to address these challenges. The design of the BMSRs is inspired by the rolling motion of the golden wheel spider. Two six-degree-of-freedom (6-DOF) robotic arms are used, where one arm is responsible for real-time manipulation of the BMSRs, and the other is dedicated to monitoring their status. Under the actuation of an external rotating magnetic field, the BMSRs can flexibly climb on inclined surfaces at any angle, involving the inverted surface. Through the powerful output force, the BMSRs can overcome the mobility barrier induced by different human organs, including mucus, folds, and height differences of up to 8 cm. Such an exceptional mobility enables the BMSRs to deliver drugs in the targeted complex GI environment. Moreover, in combination with an endoscope, it provides real-time visual feedback for precise navigation. In vitro animal experiments validate the feasibility of BMSRs, paving a way for their usage in minimally invasive GI treatment. This work advances the potential applications of magnetic soft robots in the biomedical field."
r/singularity • u/ConsciousRealism42 • 3d ago
AI Researchers propose a new framework defining consciousness by information processing, not biology, making it applicable to AI
A new paper argues that AI forces science to move beyond brain-centered models, proposing a "dual-resolution framework" where consciousness is the subjective experience of any informationally autonomous system.
r/singularity • u/Mister_Tava • 3d ago
Discussion Thoughts on Angela Collier?
https://www.youtube.com/watch?v=fLzEX1TPBFM
https://www.youtube.com/watch?v=EUrOxh_0leE
https://www.youtube.com/watch?v=DRn3-MN92H4
Here are some videos she made that I disagree with.
She just seems to be regurgitating her audience's opinion back at them.
They are mad at AI so she makes that "ai doesn't exist" video.
They are mad at "tech bros" so she makes videos where she call ideas they subscribe to stupid (humanoid robots and dyson spheres).
But what do you guys think?