Sunday, July 6, 2025

Latest Tech News

  • Benchmarks show AMD’s new EPYC 4005 series outperforming the older eight-channel EPYC 7601 system while using just two DIMMs
  • Performance-per-Watt improvements put AMD’s 4005 chip in a new league of server efficiency
  • Grado proves newer design beats older bulk - less memory, lower power, yet more performance

In an eight-year leap, AMD’s new EPYC 4585PX processor from the EPYC 4005 “Grado” series has shown performance improvements that nearly triple the output of AMD’s original flagship server chip, the EPYC 7601.

Interestingly, the EPYC 4585PX processor is not part of the high-end EPYC 9005 family but rather a lower-cost, power-efficient alternative.

According to Phoronix, over 200 benchmarks were run on Ubuntu 25.04 across varied workloads: server tasks, HPC, scripting, media encoding, and compilation.

Benchmarks highlight a dramatic efficiency jump

On average, the EPYC 4585PX delivered 2.69 times the performance of the original 7601, despite fewer memory channels and a more compact setup.

When adjusted for power, the improvement looks even more striking: on a performance-per-Watt basis, the newer chip is 2.85x more efficient, thanks to more refined architecture and improved design efficiency.

These results are likely to interest enthusiasts of the best server hardware, and they raise questions about how far older enterprise systems have fallen behind.

It also puts AMD’s lower-cost chips in contention with more expensive processors typically used by top-tier web hosting providers.

Not everything is a clean win, however. While the wall power usage of the full system was significantly improved - 225W for the newer platform compared to 238W for the older Naples server - the CPU-level measurements were less decisive.

Average CPU consumption was 153W for the EPYC 4585PX and 141W for the older 7601, with peak values of 204W and 195W, respectively.

These figures suggest that while the system as a whole has become more efficient, the processor alone hasn’t cut energy use as dramatically.
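The per-watt figure follows directly from the throughput and wall-power averages quoted above. A quick back-of-the-envelope check (a sketch using only the article's numbers):

```python
# Back-of-the-envelope check using the averages quoted above.
perf_ratio = 2.69                   # EPYC 4585PX vs EPYC 7601 performance ratio
wall_new, wall_old = 225.0, 238.0   # average system wall power, watts

# Higher performance at lower wall power compounds into the efficiency gain:
perf_per_watt_ratio = perf_ratio * (wall_old / wall_new)
print(round(perf_per_watt_ratio, 2))  # 2.85, matching the reported figure
```

The same arithmetic applied to the CPU-only figures (153W vs 141W) would shrink the gap, which is exactly the article's point about system-level versus processor-level efficiency.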

For those seeking green infrastructure, especially small business operators or SOHO setups, the lower idle draw may be more relevant than full-load comparisons.

Running on a modern Supermicro platform with just two DDR5 DIMMs, the EPYC 4585PX system still managed to beat the eight-channel memory performance of the EPYC 7601 in most workloads.

That suggests memory bandwidth isn’t the only performance determinant anymore.

With support for newer chipsets and more efficient memory, the “Grado” system appears to offer real headroom for entry-level infrastructure deployments, especially for NAS builds where power efficiency and thermal limits matter.

The data shows AMD’s low-cost EPYC 4005 chips may now outperform former flagships without breaking the bank or the power budget.

The upcoming comparison with EPYC 9005 chips promises even greater gains, although the takeaway for now is that you no longer need a premium part to get premium performance.

from Latest from TechRadar US in News,opinion https://ift.tt/4znFijZ

Saturday, July 5, 2025

The Car Battery Jump Starter I Recommend to Everyone Is 40% Off With This Remaining July 4th Deal

The Powrun P-One jump starter is surprisingly affordable, and it helps me stay one step ahead of car troubles.

from CNET https://ift.tt/XidNnbv

How to Watch the Jack Catterall vs. Harlem Eubank Fight Live

This marks Catterall's first match as a welterweight.

from CNET https://ift.tt/nwReS7N

Latest Tech News

  • Seagate’s 30TB Exos M is helium-filled and built for data centers, not home PCs
  • 2.5 million hours MTBF sounds great until you realize how specific this use case is
  • The IronWolf Pro HDD targets NAS users, not hyperscale cloud infrastructure like Exos M

A new listing for Seagate’s 30TB Exos M hard disk drive has appeared online, offering what is currently the largest HDD available for under $620.

ServerSupply lists the drive at $650, but applying the site’s 5% discount brings the price down to $617.50.

Seagate’s Exos M (model ST30000NM004K) is a helium-sealed 3.5-inch internal hard drive built around conventional magnetic recording (CMR) technology.

Enterprise-grade capacity at an unexpectedly low price

With a 7200 RPM spindle speed and a 512MB multi-segmented cache, it delivers a sustained data transfer rate of up to 275MB/s.

The drive supports a SATA interface and is hot-plug capable. According to Seagate, it is designed for high-capacity use cases including hyperscale data centers, enterprise backup systems, and distributed file storage frameworks like Hadoop and Ceph.

The manufacturer also reports a mean time between failures (MTBF) of 2.5 million hours and an annualized failure rate of just 0.35%, suggesting this model is meant for non-stop, 24/7 operation.
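Two of the quoted spec-sheet numbers can be sanity-checked with simple arithmetic; the sketch below (assuming the figures above) shows that the 0.35% annualized failure rate follows from the MTBF, and how long a full sequential fill would take at the rated transfer speed:

```python
# Sanity checks on the quoted spec-sheet figures.
mtbf_hours = 2_500_000
hours_per_year = 8760
afr = hours_per_year / mtbf_hours      # expected failures per drive-year
print(f"AFR ≈ {afr:.2%}")              # 0.35%, matching the spec sheet

capacity_bytes = 30e12                 # 30 TB (decimal, as marketed)
rate_bytes_s = 275e6                   # 275 MB/s sustained transfer rate
fill_hours = capacity_bytes / rate_bytes_s / 3600
print(f"Full sequential fill ≈ {fill_hours:.1f} h")  # roughly 30.3 hours
```

The day-plus fill time is worth keeping in mind for rebuild scenarios: repopulating a failed 30TB drive in an array is a lengthy, drive-stressing operation.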

Additional features include PowerBalance and PowerChoice technologies for more efficient energy management, and RSA 3072 firmware verification for security.

These specifications strongly indicate that the Exos M is tailored toward enterprise infrastructure, not the typical desktop setup.

Another Seagate 30TB drive also appears in listings and shares many of the same core specifications. Provantage lists the IronWolf Pro ST30000NT011 HDD for a slightly higher $669.69, still an affordable price for a drive of this capacity.

Although they share similarities (30TB, CMR, 7200 RPM), their firmware, vibration tolerance, and workload optimizations will likely differ because Exos is tuned for hyperscale environments, while IronWolf Pro is optimized for NAS workloads.

Despite the attention-grabbing capacity, calling Seagate Exos M the best HDD depends entirely on context.

For cloud infrastructure and archival storage, it may represent strong value, particularly at this price.

But for everyday users, its 3.5-inch form factor, 7200 RPM speed, and enterprise-oriented feature set make it impractical.

Ultimately, the Seagate Exos M is a highly specialized product, but its pricing makes it look accessible.

from Latest from TechRadar US in News,opinion https://ift.tt/93RVeJT

Latest Tech News

  • Flash memory now doubles as secure key storage using conceal-and-reveal method
  • Encryption keys hidden in plain sight in standard commercial 3D NAND memory
  • Machine learning attacks failed to guess the keys, showing true randomness and security

As digital data volume continues to grow with the rise of AI, cloud services, and connected devices, securing that data has become increasingly difficult.

Traditional password-based protections are no longer enough, and while hardware security solutions like Physical Unclonable Functions (PUFs) offer stronger protection, they have struggled with real-world deployment.

Most PUFs require custom hardware and lack the ability to hide keys when not in use, leaving systems exposed.

Unique and unpredictable

A research team at Seoul National University has introduced a new hardware security approach called Concealable PUF. This method uses commercial 3D NAND flash memory, typically found in mainstream storage devices, to create a secure method of storing and hiding encryption keys.

What sets this apart is its ability to hide a key beneath user data and reveal it only when needed. The technique was recently published in Nature Communications.

The key innovation involves a weak application of the GIDL (Gate-Induced Drain Leakage) erase process. This boosts variation between memory cells, making each chip's characteristics unique and unpredictable.

These variations can be used to generate the PUF data that serves as a secure, unclonable key.
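As a rough illustration of the idea (a toy sketch, not the paper's actual extraction method): if each cell exhibits a random, device-unique analog value after the weak erase, stable key bits can be derived by comparing cells pairwise, a common PUF construction:

```python
import random

# Toy model: device-unique analog values per cell after a weak GIDL erase.
# (Illustrative only; the paper's actual extraction procedure differs.)
random.seed(42)  # stands in for one chip's fixed manufacturing variation
cell_values = [random.gauss(0.0, 1.0) for _ in range(256)]

# Derive 128 key bits by pairwise comparison of neighboring cells:
key_bits = [int(cell_values[2 * i] > cell_values[2 * i + 1])
            for i in range(128)]
print(len(key_bits))  # 128
```

Because the underlying variation is physical and random, a different chip (a different seed, in this toy model) yields an unrelated key, which is what makes the function unclonable.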

With this approach, no structural or circuit changes are required. The method works directly with standard V-NAND flash memory, making it easier to scale.

This could potentially allow hardware-level security to be implemented in everyday consumer electronics without added cost or complexity.

The university says Concealable PUF passed stress tests which included wide temperature ranges and over 10 million read cycles. It also withstood machine learning-based attacks, which could not predict the key beyond random guessing levels.

Impressively, the key could be concealed and revealed over 100 times without any errors, showcasing the system’s stability.

Professor Jong-Ho Lee, who led the project, said, “Concealable PUF stands out for its creativity and practicality, as it utilizes mass-produced vertical NAND flash memory technology without modifications.”

Lead author Sung-Ho Park added, "This research is significant because it demonstrates how PUFs can be implemented using the erase operation of existing V-NAND flash memory without altering the circuitry or design. By enabling selective exposure of the security key, our method opens up new possibilities for enhancing both security and memory efficiency."

The team plans to extend this technology into other security-focused hardware solutions, targeting industries like IoT, mobile, and automotive electronics.

Via TechXplore

Concealable PUF using GIDL erase on V-NAND flash memory

Concealable PUF using GIDL erase on V-NAND flash memory. (a) Schematic of the concealable PUF using V-NAND flash memory. (b) Circuit diagram of V-NAND flash memory. (c) Description of the GIDL erase method (Image credit: Nature Communications)

from Latest from TechRadar US in News,opinion https://ift.tt/970pyZa

Friday, July 4, 2025

Your July 4th Weekend Streaming Watch List: 'Sinners,' 'The Old Guard 2' and 'Heads of State'

Don't miss the latest on Max, Netflix and other streaming services. Here's what you should binge this weekend.

from CNET https://ift.tt/DribKGI

Latest Tech News

  • MicroSD card survey tested 200 models to uncover fakes, performance gaps, and endurance failures
  • Fake flash was common among cheap high-capacity cards, which silently discard data written past their true capacity
  • Name-brand cards generally outperformed off-brand models in speed, reliability, and total write endurance

One man has taken the task of testing microSD cards to a level most users would never entertain.

Over the course of a year, tech enthusiast Matt Cole bought and tested 200 different models, ranging from 8GB to 1TB, with a particular focus on identifying fakes, testing performance, and measuring durability.

Fifty-one of those cards failed during testing.

Writing over 100TB of data per day

Cole is the creator of The Great microSD Card Survey, a deep, evolving benchmark report (and a serious labor of love) that began in July 2023.

He built a testing rig with eight machines and nearly 70 card readers running continuously, writing over 100TB of data per day.

To date, the setup has written more than 18 petabytes of data to the cards under test conditions. Impressively, his entire effort is self-funded, although he does have an Amazon wishlist should anyone wish to buy him further cards to test.

Cole’s goal was to understand how these tiny storage devices differ across brand, price, and origin.

One of his main goals is to identify “fake flash,” where a card tells the host device it has more storage than it really does.

A 1TB card might really only store 8GB. Once that real limit is reached, new data is silently lost. He also highlights “skimpy flash,” where a card is technically real, but provides less usable space than advertised, a common issue even among name-brand cards.
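The standard way to catch fake flash (the approach tools like f3 take, and the spirit of Cole's testing) is to write an address-derived marker across the whole claimed capacity, then read everything back. The toy model below simulates a card that silently drops writes past its real limit, which is exactly how such a card gives itself away:

```python
import hashlib

REAL, CLAIMED = 8 * 1024, 64 * 1024  # toy sizes: 8 KiB real vs 64 KiB claimed

class FakeCard:
    """Toy model of fake flash: writes past the real limit are silently lost."""
    def __init__(self):
        self.mem = bytearray(REAL)
    def write(self, off, data):
        for i, b in enumerate(data):
            if off + i < REAL:
                self.mem[off + i] = b
    def read(self, off, n):
        return bytes(self.mem[off + i] if off + i < REAL else 0
                     for i in range(n))

def marker(off):
    # A marker derived from the address itself, so every block is unique.
    return hashlib.sha256(off.to_bytes(8, "big")).digest()[:16]

def probe(card, claimed, block=256):
    """Write markers across the claimed capacity, then verify; return the
    first offset whose data was lost (an upper bound on real capacity)."""
    for off in range(0, claimed, block):
        card.write(off, marker(off))
    for off in range(0, claimed, block):
        if card.read(off, 16) != marker(off):
            return off
    return None

print(probe(FakeCard(), CLAIMED))  # 8192: the card's true capacity
```

A genuine card returns `None` from this probe; a counterfeit reveals its real limit at the first mismatched block.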

His survey doesn’t stop at capacity. Cole also tested whether cards live up to their advertised speed class ratings, such as U1, U3, or V30.

He ran sequential and random I/O tests, then tracked endurance through repeated write and read cycles.

Some cards survived over 20,000 cycles, while others failed before reaching 500. Temperature monitoring was also part of the process, though it’s still unclear how much heat affects long-term performance.

Among the best microSD cards were the Kingston Canvas Go! Plus 64GB, PNY PRO Elite Prime 64GB, SanDisk Extreme 64GB, Delkin Devices HYPERSPEED 128GB, and Samsung EVO Plus 64GB.

These models performed well across multiple metrics and came close to advertised specs.

Cole’s blog includes charts and summaries to help buyers quickly find reliable options, and it’s frankly a stunning piece of work. He’s not done yet, either: testing continues unabated, with more cards in the queue, hopefully including some of the largest-capacity models.

microSD card test overall scores

(Image credit: Matt Cole)

from Latest from TechRadar US in News,opinion https://ift.tt/GEr9JPO

Thursday, July 3, 2025

Our Group Text Is Sending the Top July 4th and Prime Day Deals Directly to Your Phone

With Fourth of July sales happening now and Prime Day less than a week away, our CNET shopping experts can help you get the best discounts with the least amount of effort.

from CNET https://ift.tt/pu8cVyK

Latest Tech News

  • OpenAI adds Google TPUs to reduce dependence on Nvidia GPUs
  • TPU adoption highlights OpenAI’s push to diversify compute options
  • Google Cloud wins OpenAI as customer despite competitive dynamics

OpenAI has reportedly begun using Google’s tensor processing units (TPUs) to power ChatGPT and other products.

A report from Reuters, which cites a source familiar with the move, notes this is OpenAI’s first major shift away from Nvidia hardware, which has so far formed the backbone of OpenAI’s compute stack.

Google is leasing TPUs through its cloud platform, adding OpenAI to a growing list of external customers which includes Apple, Anthropic, and Safe Superintelligence.

Not abandoning Nvidia

While the chips being rented are not Google’s most advanced TPU models, the agreement reflects OpenAI’s efforts to lower inference costs and diversify beyond both Nvidia and Microsoft Azure.

The decision comes as inference workloads grow alongside ChatGPT usage, now serving over 100 million active users daily.

That demand represents a substantial share of OpenAI’s estimated $40 billion annual compute budget.

Google's v6e “Trillium” TPUs are built for steady-state inference and offer high throughput with lower operational costs compared to top-end GPUs.

Although Google declined to comment and OpenAI did not immediately respond to Reuters, the arrangement suggests a deepening of infrastructure options.

OpenAI continues to rely on Microsoft-backed Azure for most of its deployment (Microsoft is the company’s biggest investor by some way), but supply issues and pricing pressures around GPUs have exposed the risks of depending on a single vendor.

Bringing Google into the mix not only improves OpenAI’s ability to scale compute, it also aligns with a broader industry trend toward mixing hardware sources for flexibility and pricing leverage.

There’s no suggestion that OpenAI is considering abandoning Nvidia altogether, but incorporating Google's TPUs adds more control over cost and availability.

The extent to which OpenAI can integrate this hardware into its stack remains to be seen, especially given the software ecosystem's long-standing reliance on CUDA and Nvidia tooling.

from Latest from TechRadar US in News,opinion https://ift.tt/ypCgh52

Wednesday, July 2, 2025

I'm Hosting a 4th of July Cookout and This Steak Hack Is My Favorite Go-To on the Grill

One small change made delivering the perfect steak easy.

from CNET https://ift.tt/mr7f6Qg

Latest Tech News

  • Huawei has filed for patents for a sulfide-based, all-solid-state battery
  • The company theorizes it could unlock up to 3,000km (1,864 miles) of range
  • Ultra-fast charging could top the battery up in under five minutes

Huawei is the latest in a growing list of automakers and tech companies that are exploring the possible benefits of fitting an EV with solid-state batteries, with the likes of BMW, Mercedes-Benz, VW, BYD and Stellantis all publicly touting the tech.

Car News China reports that the tech giant has filed a patent that outlines a solid-state battery architecture with energy densities between 400 and 500 Wh/kg, two to three times that of today's mainstream EV batteries.

Currently, Huawei doesn't manufacture its own branded vehicles in China, but instead works with various automakers to apply some of its existing technologies to vehicles.

According to the patent application, its batteries use a method that ‘dopes’ sulfide electrolytes with nitrogen to address side reactions at the lithium interface. However, it is keeping the remainder of its technology close to its chest, as the race to mass-produce solid-state battery technology safely and at scale is well and truly on.

What’s more, the company theorizes that it is able to eke some 1,864 miles of range from its battery technology, as well as complete the industry standard 10-80% charge in less than five minutes.

However, some industry experts are skeptical of those bold claims, pointing out that it is a leap of more than three times the current range abilities of the most impressive electric vehicles on sale today.

Speaking to Electrek, Yang Min-ho, professor of energy engineering at Dankook University, said that such performance "might be possible in lab conditions" but went on to explain that reproducing the results in the real world, where energy loss and thermal management play a key role, would be "extremely difficult".

The professor was also quick to point out that the nitrogen doping method is a "standard technique" that, again, can be applied in a laboratory environment but is currently difficult to scale to a point where it can be mass produced to meet the demands of global automakers.

Analysis: big headlines, small steps

Porsche Battery Lab Weissach

(Image credit: Porsche)

Understandably, China is basking in its EV dominance at the moment and it isn’t afraid to publicize innovations that have the potential to change the game.

MegaWatt charging is one of the more recent topics, but solid-state batteries have also been bubbling away under the surface for some time. Undoubtedly, China will get to this technology first, but it likely won’t be as soon as many domestic companies make out, nor as impressive.

What’s more, the 1,800-mile figures seem largely pointless, as it would require a huge battery pack that is going to add excess weight and blunt driving dynamics in a vain attempt to dispel notions of range anxiety.

Should Huawei be able to nail energy densities between 400 and 500 Wh/kg, it would be far better placed producing smaller packs that can still offer an impressive range without the need for enormous, expensive batteries.

When an EV can easily cover 600 miles on a single charge, range anxiety largely becomes obsolete, as few drivers want to sit for hours on end without a break. Plus, with the public charging network expanding and improving year-on-year, it is now arguably easier than ever to find a spot to plug in and stretch their legs.

from Latest from TechRadar US in News,opinion https://ift.tt/YZ3FbmG

Latest Tech News

  • Anthropic's MCP Inspector project carried a flaw that allowed miscreants to steal sensitive data, drop malware
  • To abuse it, hackers need to chain it with a decades-old browser bug
  • The flaw was fixed in mid-June 2025, but users should still be on their guard

The Anthropic Model Context Protocol (MCP) Inspector project carried a critical-severity vulnerability which could have allowed threat actors to mount remote code execution (RCE) attacks against host devices, experts have warned.

Best known for its Claude conversational AI model, Anthropic developed MCP, an open source standard that facilitates secure, two-way communication between AI systems and external data sources. It also built Inspector, a separate open source tool that allows developers to test and debug MCP servers.

Now, it was reported that a flaw in Inspector could have been used to steal sensitive data, drop malware, and move laterally across target networks.

Patching the flaw

Apparently, this is the first critical-level vulnerability in Anthropic’s MCP ecosystem, and one that opens up an entirely new class of attacks.

The flaw is tracked as CVE-2025-49596, and has a severity score of 9.4/10 - critical.

"This is one of the first critical RCEs in Anthropic's MCP ecosystem, exposing a new class of browser-based attacks against AI developer tools," Avi Lumelsky from Oligo Security explained.

"With code execution on a developer's machine, attackers can steal data, install backdoors, and move laterally across networks - highlighting serious risks for AI teams, open-source projects, and enterprise adopters relying on MCP."

To abuse this flaw, attackers need to chain it with “0.0.0.0 Day”, a two-decade-old vulnerability in web browsers that enables malicious websites to breach local networks, The Hacker News explains, citing Lumelsky.

By creating a malicious website, and then sending a request to localhost services running on an MCP server, attackers could run arbitrary commands on a developer’s machine.

Anthropic was notified about the flaw in April this year, and came back with a patch on June 13, pushing the tool to version 0.14.1. Now, a session token is added to the proxy server, as well as origin validation, rendering the attacks moot.
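In principle, the mitigation combines two checks: a localhost service refuses requests whose Origin header isn't on an allowlist, and requires a random per-session token the attacker's page cannot know. A minimal sketch of that class of fix (hypothetical code, not Anthropic's actual patch; the port and names are invented):

```python
# Hypothetical sketch of the mitigation class applied to Inspector:
# reject cross-origin requests and require a per-session token.
import secrets

ALLOWED_ORIGINS = {"http://localhost:6274", "http://127.0.0.1:6274"}
SESSION_TOKEN = secrets.token_hex(16)  # generated when the server starts

def request_allowed(origin: str, auth_header: str) -> bool:
    # A drive-by request from a malicious website carries that site's own
    # Origin header and has no way to learn the random session token.
    return (origin in ALLOWED_ORIGINS
            and auth_header == f"Bearer {SESSION_TOKEN}")

print(request_allowed("http://localhost:6274", f"Bearer {SESSION_TOKEN}"))
print(request_allowed("https://attacker.example", f"Bearer {SESSION_TOKEN}"))
```

Either check alone defeats the browser-based chain described above; together they also guard against misconfigured proxies and token leakage.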

from Latest from TechRadar US in News,opinion https://ift.tt/L4YVIdr

Tuesday, July 1, 2025

Apple Will Release At Least One More iOS 18 Update Before iOS 26

Apple will likely release iOS 18.6 this month but don't expect many new features.

from CNET https://ift.tt/iJ4mVoP

Latest Tech News

  • OWC Express 4M2 enclosure offers an alternative route to large, fast external storage
  • Thunderbolt 3 on Windows cripples performance to well below the advertised maximum speed
  • OWC Express 4M2 SSD slots are PCIe 4.0 x1 only, so individual drive speeds are limited

In a market saturated with expensive high-capacity storage, the OWC Express 4M2 enclosure offers an alternative route to large, fast external storage without immediately breaching the $3000 mark.

At $239.99 for the base configuration, this device is cheaper than the TerraMaster D4 SSD and offers a flexible foundation for building what could amount to a 32TB setup when paired with four 8TB NVMe drives.

The company promotes this device as capable of up to 3200MB/s throughput, but real-world performance is highly variable.

Maximum speed requires RAID and careful system configuration

The four M.2 NVMe slots support only PCIe 4.0 x1, which limits individual drive performance to about 1600MB/s.

Reaching peak speeds, therefore, requires RAID configurations and optimal conditions, factors that introduce complexity.

It provides support for RAID 0, 1, 4, 5, and 10, but again, achieving these benefits depends on software licensing, drive quality, and user knowledge.
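The arithmetic behind the headline number is straightforward: each x1 slot tops out around 1600MB/s, and an ideal RAID 0 stripe scales with drive count until the host link itself becomes the ceiling. A rough sketch of that model (idealized; real-world throughput will be lower):

```python
# Idealized throughput model; real-world numbers will be lower.
PER_SLOT_MB_S = 1600    # PCIe 4.0 x1 limit per M.2 slot
HOST_LINK_MB_S = 3200   # roughly what the USB4/Thunderbolt link sustains

def raid0_throughput(drives: int) -> int:
    # An ideal stripe scales linearly until the host link caps it.
    return min(drives * PER_SLOT_MB_S, HOST_LINK_MB_S)

for n in (1, 2, 4):
    print(n, raid0_throughput(n))  # 1 -> 1600, 2 -> 3200, 4 -> 3200
```

This also shows why a single drive in the enclosure can never approach the advertised figure, and why adding a third or fourth drive buys capacity and redundancy options rather than extra sequential speed.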

Users might be drawn by the enclosure’s speed, but should be aware that performance gains require effort and understanding.

Compatibility with USB4 and Thunderbolt standards across macOS and Windows gives the enclosure broad appeal, although actual speed will be gated by the host device.

For example, systems running on older Thunderbolt 3 ports under Windows are capped well below full bandwidth.

While macOS users gain extra features such as booting from RAID arrays, this is limited to systems running at least macOS 11.3.

The Express 4M2 does deliver in terms of construction and thermal management.

Its aircraft-grade aluminum chassis is paired with a dual-fan system that activates only under high thermal load, which ensures quiet operation in most scenarios.

OWC's inclusion of SoftRAID on some models introduces functionality typically reserved for more expensive storage systems.

For those trying to assemble a portable SSD setup or replace their external HDD with something faster, this enclosure makes economic sense, but only just.

However, the cost of filling all four bays with quality 8TB SSDs still pushes the total towards $3000, making this option practical only for those who can supply their own drives or already have SSDs on hand.

from Latest from TechRadar US in News,opinion https://ift.tt/2SHCtBp

Latest Tech News

Rumors that Apple might make an affordable, multi-colored MacBook based on the A18 Pro chip sparked considerable excitement and ignited a hope in me and others that it could mark the return of the 12-inch MacBook.

Unveiled a decade ago, the 2-pound, gold-finished MacBook, with its full-sized keyboard, was, for its time, an engineering marvel. Apple arguably rewrote its laptop playbook to create the system.

Shortly after launch, I spoke to the marketing head Phil Schiller and Apple's Mac and iPad lead (and now Apple's senior vice president of Hardware Engineering) John Ternus about all Apple did to make the portable wonder.

The pair spread out before me things like a multi-tiered battery and something called the "speaktenna", which was basically a combination of Wi-Fi and Bluetooth antennas and a speaker system.

"We ended up with a group of antenna engineers who know more about speakers than any other antenna engineers and a group of speaker engineers who knew more about antenna design than just about anyone else in the world," Ternus told me.

There was a passion around the product that rivaled that of the Apple Watch, which launched alongside it.

The 12-inch MacBook was also a bit of an odd duck. It was lighter than a MacBook Air, but it was not an Air. It didn't have the same number of ports. In fact, there was just one USB-C port running at USB 3.1 speeds (pokey compared to the Thunderbolt 4 ones you find on today's MacBook Airs). Oh, and did I mention that the port handled charging duties, too?

Encrusted with components on both sides, the 12-inch MacBook's motherboard was small enough to fit in the palm of my hand. Everything about the MacBook's components was built in support of its enviable proportions.

At its thinnest point, it measured just 0.35cm, slimmer than the current MacBook Air. I love the current 13-inch MacBook Air (M4), but I'd be lying if I said I wouldn't enjoy carrying around an even lighter, thinner, and smaller cousin.

It's fair, though, to ask why the market needs such a system now.

First, let's imagine what the MacBook 12-inch 2026 might be. It would have:

  • An A18 Pro CPU
  • 13-inch LED-backlit Retina display
  • 16 GB of memory to support Apple Intelligence
  • 128 GB of on-board storage
  • 30GB of free iCloud storage
  • A fanless design
  • A MagSafe Charge port
  • A USB-C style Thunderbolt 3 port
  • Recycled aluminum enclosure
  • A full-sized magic keyboard including Touch ID
  • A 4-inch Force Touch Trackpad

Design-wise, the 12-inch MacBook A18 Pro would align closely with the MacBook Air line. No more wedge, instead two flat panels squeezed together into a 0.35cm-thick slab.

Granted, everything above is guesswork, but I believe that configuration would fit neatly into a $599 package (maybe even a $499 one).

Nothing here is new, and the A18 Pro is plenty powerful and efficient to run such a system.

The benefit, obviously, is an affordable, yet nearly full-sized portable that is a complete system. I am well aware you can buy an M4 Mac Mini for $599, but you still need to buy a mouse, keyboard, and screen. This, by contrast, would be the full Apple MacBook package at, finally, an affordable price.

You might have also noticed the rather paltry base storage. That's to help keep costs down. It's buttressed, though, by something Apple desperately needs to do: offer more versatile and forgiving iCloud storage options.

The usual 5GB of free storage is not enough, and I think the extra 30GB would offset the limited local storage, moving those who are on the fence about the 12-inch MacBook into must-buy territory.

Give them what they want

If Apple balks at reintroducing the confusing "MacBook" name, especially for a machine smaller and lighter than the MacBook Air, it could call it the MacBook Air LT (for light) or MacBook Air A (for its A-series chip).

One need only look at Walmart for evidence that consumers want such a system. It has been selling the old-school-design MacBook Air M1 for years, first at $699 and now at $649. People are desperate for a truly affordable Mac, but they are probably tired of the growing performance compromises tied to the aging M1 chip.

The A18 Pro will sing in a tiny 12-inch laptop, and the system's incredibly small proportions will make it a hit with those who might otherwise have opted for a lightweight iPad with a Magic Keyboard Folio so as not to weigh down their backpack.

Apple has learned, thanks to Apple Silicon, so much about building lightweight and performant systems that it makes sense to extend the MacBook idea in new and maybe unexpected directions.

A revived 12-inch MacBook would sell like hotcakes and pave the way for more fresh ideas, like a 12-inch MacBook Air running an M3 chip. That one could sell for $699.

from Latest from TechRadar US in News,opinion https://ift.tt/xLplZtR

Latest Tech News

A new study warns evolvable AI systems could adapt and reproduce faster than any biological species, escaping ...