Google's latest feature drop might be a big win for summer travel
As with flights, you can now track prices and set alerts for hotels
Google Maps can also now pull potential places to visit from screenshots
If you’re a fan of Google Flights, especially for the price tracking data and how the current prices you’re seeing rank against other days, you’re in for a treat. As part of a drop of features fit for upcoming summer travel, Google aims to do for hotels what it’s done for flights.
And yes, it’s as good as it sounds. Now, when you search for hotels on Google, you’ll have the option to ask the search giant to track prices. Essentially, you turn on the feature and then get an alert if there is a price drop.
Similar to flights, you can be fairly specific, setting a price range or effectively telling Google 'don’t bother me unless the price falls below this'. It will even factor in a star rating, if you have one selected, and the general area where you were searching for a hotel.
(Image credit: Google)
Google is rolling out this new hotel price tracking feature globally on desktop and mobile. Once it’s available, you’ll find it right in search, complementing the historical hotel pricing data Google already surfaces.
This hotel-focused feature is launching alongside some other new functionality from Google, all billed under getting ready for summer travel. The ability to set up price alerts for hotels is undoubtedly the most user-friendly feature and could have the most significant impact. It could potentially help you save on a stay.
Another new feature that could help you better prepare for a trip is screenshot support within Google Maps. If you enable it, Google Maps will look through your screenshots and deliver a list of the places they mention.
So, if you've been screenshotting TikToks about the best places to eat in New York City or maybe a list of the best ice cream spots in Boston, you won't need to dig through all of them to find every place mentioned.
Instead, with some AI help, Google Maps will look through your screenshots, find those spots, and collect them in a handy list for you. It'll live in the app in a list titled "Screenshots," and this feature is entirely optional.
(Image credit: Google)
This feature could prove helpful, but considering that screenshots aren’t just used for travel or remembering specific spots, this could also be a bit of a privacy concern.
It is opt-in only and not on by default. It's rolling out now to mobile devices set to US English, arriving on iOS first with Android following shortly.
Look, I love Windows, I do, I really do. It's one of those things that I just can't live without at this point. I've tried MacOS, I've tried Linux, I've even dabbled in the world of Android and Chromebooks during my time, and yet, none of it compares to Windows; it just doesn't.
There's a certain amount of familiarity, of indoctrination into that Microsoft cult that's rife in me. I grew up using Windows 98, and onwards, it was what I gamed on, what I studied on, what I made lifelong friends on—you name it. 98, 2000, XP, Vista, 7, 8, 10, and finally we're here at Windows 11, at least until Microsoft inevitably tells us that its "final" operating system isn't its final operating system.
The thing is though, it really is a bag of spanners at times, and I've kinda developed this habit of going thermonuclear on my own machine at quite regular intervals over my lifetime.
Mostly by flattening and reinstalling Windows onto my PC every other month or so. Why? I'm glad you asked.
The need for an occasional refresh
Well, the thing is, although Windows gives you a lot of freedom and broader compatibility with programs than any other operating system out there, that openness makes it somewhat susceptible to bugs. Lots of them.
These can be inflicted by Microsoft directly through Windows updates, by drivers accidentally corrupting files or programs, or, well, via any number of other avenues.
The worst culprit, of course, is the classic "upgrade from the previous Windows version to this version." Just don't; it's never worth it.
Windows is great, but no operating system is designed to run perfectly forever. (Image credit: Microsoft)
See, registry files corrupt, file directories get mislabelled, and inevitably you'll end up with programs you forget about sitting in the background sucking up critical resources. It's just a bit crap like that, and ironically, although I do have a massive disdain towards macOS, I can't deny its closed-off ecosystem does avoid a lot of these pitfalls.
Whenever anyone asks me about a system bug or help with troubleshooting, my first and often instant reaction is to suggest just flattening the machine entirely and reinstalling a fresh version of Windows on top.
That's why I advocate tying a full-fat Windows license to your Microsoft account so you can easily reinstall and activate Windows 11 on your machine on a dime.
An arduous task
This salting-the-earth kind of strategy does take some getting used to, but the benefits are just too great to ignore.
The first thing I recommend is splitting up your storage solution. In every build I've ever done, I've almost always recommended a two-drive system. The first and fastest of the two should be your main OS drive, and the second, usually slower, cheaper, and larger, should serve as your media/games/back-up drive. Any valuable documents, assets, or big downloads live there.
That lets you keep all your games and important files on your D: drive, and then, whenever that re-install time comes a-calling, quickly flatten and re-install Windows on your C: drive.
If you've got slow internet or just can't be bothered to re-download everything, it is a huge time-saver doing it this way. You can get away with partitions, but it's far easier to accidentally delete the wrong one on your next Windows install.
Laptop, desktop; it doesn't matter, just give your hardware an OS break now and then. (Image credit: Sergey Kisselev / Behance.net / Microsoft)
It also helps really reduce program and document clutter and encourages good back-up practice too. If you know you're going to flatten a machine every 2-3 months, then the likelihood is you'll keep all of your important files and documents safely stored in the cloud, or off-site, backed up with solid authentication procedures as well.
You'll end up with a minimal desktop that's stupidly rapid, clean, up-to-date, and as error-free as Microsoft can muster. If you're building a new PC or transferring an old one to updated hardware, save yourself the hassle and just back up and move your most important files, download a fresh USB Windows Installer, and get cracking. I promise you it's worth it.
A new lease on (virtual) life
With that, and good internet education and practice, plus a solid VPN, you can then dump aftermarket antivirus as well and rely on good ol' Windows Defender. It's one of the best antivirus programs out there, and lacks the resource vampirism many third-party solutions have.
Worst-case scenario, you get tricked into opening a dodgy email or land on an odd website, and your machine gets whacked with some crypto-scam; just flatten it. Job done. Although again, I'd highly recommend just being a bit more internet savvy first.
The only thing I'd say is that if you do go this route, be careful about which device you do it on and prep accordingly. Some motherboards won't support Ethernet or wireless connectivity without drivers, either.
Grab your USB stick, get the Windows Installer setup on it, and then stick a folder in it called DRIVERS. Head to your motherboard's product page, grab the relevant drivers, then once you're finally on the desktop, you should be able to install all your chipsets and drivers and get that internet connectivity back, no sweat.
If you do get stuck on the "need to connect to the internet" page of the Windows 11 installer, hit Shift + F10, click into the command window that appears, type OOBE\BYPASSNRO, and hit Enter. The installer will reboot, and you'll then have the option to tell Microsoft you "don't have the internet" and continue with the installation regardless.
So yeah, PSA complete. I got 99 problems, and most of them are Microsoft-related. At least for about 20 minutes anyway.
Rubin Ultra GPUs previewed at Nvidia GTC 2025 with Kyber rack mockups
Each NVL576 rack may include 576 GPUs across four internal pods
Projected power draw reaches 600kW with performance targets of 15 EFLOPS
At Nvidia GTC 2025, the company gave a preview of what its future data center hardware could look like, showcasing mockups of its Rubin Ultra GPUs housed in the Kyber-based NVL576 racks.
These systems are expected to launch in the second half of 2027, and while that’s still some way off, Nvidia is already laying the groundwork for what it describes as the next phase of AI infrastructure.
A single NVL576 rack, according to Jensen Huang, co-founder, president, and CEO of Nvidia, could draw up to 600kW. That's five times more than the 120kW used by current Blackwell B200 racks, suggesting a steep rise in power per rack going forward.
Powering the future
Tom’s Hardware reports, "Each Rubin Ultra rack will consist of four 'pods,' each of which will deliver more computational power than an entire Rubin NVL144 rack. Each pod will house 18 blades, and each blade will support up to eight Rubin Ultra GPUs - along with two Vera CPUs, presumably, though that wasn't explicitly stated. That's 144 GPUs per pod, and 576 per rack."
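The arithmetic is easy to sanity-check. Here's a trivial sketch using only the blade, pod, and rack counts from that report:

```python
# Sanity-checking the reported Rubin Ultra NVL576 configuration.
gpus_per_blade = 8    # up to eight Rubin Ultra GPUs per blade, per the report
blades_per_pod = 18   # 18 blades per pod
pods_per_rack = 4     # four pods per Kyber rack

gpus_per_pod = gpus_per_blade * blades_per_pod   # 144
gpus_per_rack = gpus_per_pod * pods_per_rack     # 576, matching the NVL576 name

print(gpus_per_pod, gpus_per_rack)  # 144 576
```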
The Kyber rack infrastructure will support these systems, along with upgraded NVLink modules which will have three next-generation NVLink connections each, compared to just two found in existing 1U rack-mount units.
The first Rubin NVL144 systems, launching in 2026, will rely on existing Grace Blackwell infrastructure. Rubin Ultra arrives in 2027 with far more density.
Tom’s Hardware says that the NVL576 racks are planned to deliver “up to 15 EFLOPS of FP4” in 2027, compared to 3.6 EFLOPS from next year's NVL144 racks.
During the GTC 2025 keynote, Jensen Huang said future racks could eventually require full megawatts of power, meaning 600kW may only be a stepping stone.
As power climbs toward the megawatt range, questions are inevitably growing about how future data centers will be powered.
Nuclear energy is one obvious answer - the likes of Amazon, Meta, and Google are part of a consortium that has pledged to triple nuclear output by 2050 (Microsoft and Oracle are notably missing for the moment), and mobile micro nuclear plants are expected to arrive in the 2030s.
Microsoft pulled out of a $12bn deal with CoreWeave, citing delays
OpenAI took over the contract, backed by Microsoft’s own investment funds
AI sector remains a closed loop driven by a few dominant players
CoreWeave is eyeing a huge (potentially $2.5 billion) IPO in the coming weeks, but it has also had a few unflattering news stories to contend with recently.
Jeffrey Emanuel, whose viral essay described Nvidia as overpriced and led to it losing $600 billion in a single day, has described CoreWeave as a turkey and called it the “WeWork of AI”.
More recently, Microsoft chose to walk away from a nearly $12 billion option to buy more data-center capacity from the AI hyperscaler.
OpenAI to the rescue
The Financial Times (FT) reported sources familiar with the matter saying Microsoft had withdrawn from some of its agreements “over delivery issues and missed deadlines” which shook the tech giant’s confidence in CoreWeave.
The FT added that despite this, Microsoft still had "a number of ongoing contracts with CoreWeave and it remained an important partner.”
Microsoft is CoreWeave’s biggest customer, and the AI hyperscaler disputed the FT's story, saying “All of our contractual relationships continue as planned – nothing has been cancelled, and no one has walked away from their commitments.”
Shortly after that news broke, it was reported that OpenAI would be taking up Microsoft's nearly $12 billion option instead, helping CoreWeave avoid a potentially embarrassing setback so near to its closely watched IPO.
Rohan Goswami at Semafor made a couple of interesting observations on the news, noting, “This isn’t a sign that Microsoft is pulling back on AI - “We’re good for our $80 billion,” Satya Nadella said on CNBC - but an indication that the company is being more tactical about exactly when and where it spends. At the same time, OpenAI’s biggest backer is Microsoft, meaning that OpenAI is paying CoreWeave with money that is largely Microsoft’s to begin with.”
He described this as the rub, saying, “The AI economy is currently a closed loop and will stay that way until a broader swath of economic actors like big and medium-sized companies start spending real dollars on AI software and services. Until then, nearly all the money is coming from a few companies - chiefly Nvidia and Microsoft - which themselves depend on the goodwill of their public shareholders to keep underwriting it all.”
Ascent GX10 is Asus's take on Nvidia's DGX Spark AI supercomputer
ServeTheHome spotted the product at GTC 2025 and went hands on
The site took photos and noted the AI computer is lighter and cheaper
Nvidia has recently been showing off DGX Spark, its Mac Mini-sized AI supercomputer built around the GB10 Grace Blackwell Superchip.
Originally called Project Digits, the device has been created to bring advanced model development and inferencing directly to desktops. Although it looks like a mini PC, it’s incredibly powerful and designed to handle demanding AI workflows such as fine-tuning, inference, and prototyping without relying entirely on external infrastructure.
Aimed at developers, researchers, data scientists, and students working with increasingly complex AI models locally, it comes with 128GB of LPDDR5x unified memory and up to 4TB of NVMe SSD storage. The DGX Spark isn’t cheap at $3999, but if you’re looking to save some money without cutting corners, there are some alternatives.
Shown off at Nvidia GTC 2025 (as, naturally, was Nvidia’s own DGX Spark), the Ascent GX10 comes with 128GB of unified memory and a Blackwell GPU with fifth-generation Tensor Cores and FP4 precision support. While DGX Spark has 4TB of storage, Asus’s version only has 1TB.
ServeTheHome was at the conference and spotted the Ascent GX10 on Asus’s stand where it snapped a few photos of the product.
The site also noted, “The front of the system has the ASUS logo and a power button. This may sound strange, but ASUS using plastic on the outside of the chassis in parts versus Nvidia using more metal is an interesting trade-off. Nvidia DGX Spark feels in hand much more like the Apple Mac Studio from a density perspective while the Asus felt lighter. If you truly want this to be a portable AI box, then ASUS may have a leg up, especially if you want to cluster it.”
On the rear of the system, STH says there’s an HDMI port, four high-speed USB4 40Gbps ports, a 10GbE NIC for base networking, and a dual-port Nvidia ConnectX-7, which Nvidia described as an Ethernet version of the CX7 designed for RDMA clustering.
STH’s Patrick Kennedy noted, “For some context here, a Nvidia ConnectX-7 NIC these days often sells for $1500–2200 in single unit quantities, depending on the features and supply of the parts. At $2999 for a system with this built-in that is awesome. Our sense is that folks are going to quickly figure out how to cluster these beyond the 2-unit cluster that Nvidia is going to support at first.”
There are reportedly no current plans for another iPhone mini
The last 'mini' model launched in September 2021
1 in 5 TechRadar readers say sub-6 inches is the best phone size
The last 'mini' phone we saw from Apple was the 5.4-inch iPhone 13 mini, which launched in September 2021, and was replaced by the 6.7-inch iPhone 14 Plus – and it seems unlikely that Apple is going to bring back a smaller iPhone model any time soon.
Bloomberg reporter Mark Gurman, who is usually a reliable source when it comes to Apple, said in a live Q & A (via MacRumors) that Apple has "really shifted away" from smaller form factors and that its engineers "are not working on a smaller iPhone right now".
The current iPhone line-up comprises the 6.1-inch iPhone 16, the 6.7-inch iPhone 16 Plus, the 6.3-inch iPhone 16 Pro, and the 6.9-inch iPhone 16 Pro Max – so if you want anything smaller than 6.1 inches in terms of display size, you're out of luck.
Gurman did say Apple might one day reconsider its position if market pressures change, but don't expect anything for the foreseeable future. This year, the iPhone 16 Plus is predicted to be replaced by the iPhone 17 Air, possibly with the same screen size.
Should Apple reconsider?
The Galaxy Z Flip 6 at least folds down to a small size. (Image credit: Samsung)
Apple never said anything officially, but market reports suggested the iPhone 13 mini wasn't a great seller – which most likely sealed its fate. But according to many TechRadar readers, the iPhone 13 mini was the perfect size for a smartphone.
We ran a poll on the TechRadar WhatsApp channel asking you what your favorite phone screen size was. Top of the pile, with 241 votes out of 799 (31%), was the largest size besides foldables: phones 6.9 inches or bigger, such as the iPhone 16 Pro Max.
In second place, however, were phones with screens under 6 inches in size – like, for example, the iPhone 13 mini. This size got 171 votes (21%), but unfortunately for small phone fans, it's getting harder and harder to find more compact handsets.
Clearly, not enough of the people who like smaller-sized handsets went out and bought an iPhone 13 mini, and Apple has taken note. If you fall into that category, you could try a flip foldable like the Samsung Galaxy Z Flip 6 instead.
Thanks, Sam Altman, for giving us access to ChatGPT's new integrated image-generation skills. They're, as Steve Jobs might've described them, insanely good.
So good, in fact, that I'm worried now about my little corner of the universe where we try to discern the accuracy of renders, models, and pre-production leaks that might tell us the tale of future Apple products, like the rumored iPhone 17 Air.
For those who don't know, the iPhone 17 Air (or Slim) is the oft-talked-about but never-confirmed ultra-slim iPhone 16 Plus/iPhone 16e/SE hybrid that could be the most exciting iPhone update when Apple likely unveils a whole new iPhone 17 line in September.
While it may not be the most powerful iPhone, it should be the biggest and thinnest of the bunch. Even the single rear camera might not be enough to keep potential buyers away.
Imagining what it could look like, well, that's my job. Or it was until I started working with ChatGPT running the recently updated 4o model, which is capable of generating images out of thin air or based on photos and images you upload into it.
It's a slightly methodical model, taking up to 45 seconds to generate an image that flows in slowly, almost one microscopic, horizontal line of pixels at a time. The results are something to behold.
It's not just the quality but how ChatGPT can maintain the thread and cohesion of images from prompt to prompt. Usually, if you start with image generation in something like OpenAI's Dall-E or, say, X's Grok, it'll do a good job with the first image.
However, when you request changes, elements of the original disappear or end up altered. It's even harder to create a series of images that appear to be part of the same story or theme. There are usually too many differences.
ChatGPT 4o image generation appears different and, possibly, more capable.
ChatGPT 4o did a nice anime glow-up with my picture of a hawk (left). (Image credit: Chat GPT-generated images along with source material)
Having already experimented a bit with the model shortly after Altman and other OpenAI engineers announced it, I quickly found that ChatGPT 4o did its best work when you started with a solid source.
I initially had fun turning images of myself and even photos I took this week of a peregrine hawk into anime. However, I was curious about ChatGPT's photo-realism capabilities, especially as they relate to my work.
Apple announced this week that WWDC 2025's keynote would fall on June 9. It's an event where the tech giant outlines platform updates (iOS, iPadOS, macOS, etc) that inform much of how we think about Apple's upcoming product lineup. With information like this, we can start to map out the future of the anticipated iPhone 17 line. Visualizing what that will look like can be tough, though. So, I decided to let ChatGPT's newest image model show me the way.
A real photo of an iPhone SE on the left and a ChatGPT 4o-generated one on the right. (Image credit: Chat GPT-generated images along with source material)
Since the iPhone 17 Air would conceivably be the newest member of the iPhone family (shoving aside the less-than-exciting iPhone 16e), I decided to focus on that.
Initially, I handed ChatGPT an older iPhone SE review image with this prompt:
"Use this photo to imagine what an Apple iPhone 17 Air might look like. Please make it photo-realistic and a nice, bright color."
ChatGPT did a good job of maintaining the settings from the original photo and most of my hand, though I think I lost a finger. It did well updating the finish and even added a second camera, making it part of a raised camera bump.
I followed with this prompt:
"This is good. Since the iPhone 17 Air is supposed to be super-thin, can you show it from the side?"
ChatGPT lost the background and made the image look like an ad for the iPhone 17 Air. It was a nice touch, but the phone didn't look thin enough. I prompted ChatGPT to make it thinner, which it did.
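A quick aside for anyone who'd rather script this kind of image-plus-prompt iteration than click through the ChatGPT app: something roughly like the sketch below should work with OpenAI's Python SDK. Treat it purely as a sketch – the model name, parameters, and base64 response handling are my assumptions, and the API isn't necessarily identical to the in-app 4o feature I used here.

```python
# Hypothetical sketch only: an image-plus-prompt edit via the OpenAI API.
# The model name and response handling are assumptions and may not match
# the in-app ChatGPT 4o image feature described in this article.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("iphone_16_pro_review.jpg", "rb") as source_image:
    result = client.images.edit(
        model="gpt-image-1",  # assumed image-capable model name
        image=source_image,
        prompt=(
            "Use this photo to imagine what an Apple iPhone 17 Air might look like. "
            "Make it photo-realistic, a nice bright color, and don't change the background."
        ),
    )

# Assuming the API returns base64-encoded image data, save it to disk.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("iphone_17_air_concept.png", "wb") as out:
    out.write(image_bytes)
```

Back to the experiment.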
This was progress, but I quickly realized my error. I hadn't based the prompt on available iPhone 17 Air rumors, and maybe I wasn't being prescriptive enough in my prompts.
(Image credit: Chat GPT-generated images along with source material)
Since the iPhone SE is now a fully retired design, I decided to start over with a review image of the iPhone 16 Pro and initially used the same prompt, which delivered an iPhone 16 Pro in a lovely shade of blue.
This time, when I asked to see the thin side of the phone, I told ChatGPT, "Don't change the background."
I was pleased to see that ChatGPT more or less kept my backyard bushes intact and seamlessly inserted the new phone in something that now sort of looked like a more attractive version of my hand.
My original iPhone 16 Pro review image is on the left. ChatGPT 4o's work is on the right. (Image credit: Chat GPT-generated images along with source material)
Some iPhone 17 Air rumors claim the phone might have just one camera, so I told ChatGPT to remove two cameras and rerender.
In previous prompts, I'd told ChatGPT to "make it thinner," but what if I gave the chatbot an exact measurement?
"Now show me the side of the iPhone 17 Air. It should be 5.4mm thick and the same color."
(Image credit: Chat GPT-generated images along with source material)
This was almost perfect. I did notice, though, that there was no discernible camera bump, which seems unlikely in a 5.4mm-thick iPhone. Even the anticipated ultra-thin Samsung Galaxy S25 Edge features a camera bump. There is no way the iPhone 17 Air will get away without one.
Finally, I asked for a render of the screen:
"Now show me the iPhone 17 Air screen. Make sure it shows the Dynamic Island. The screen should be bright and look like an iPhone home screen with apps and widgets."
Once again, ChatGPT did an excellent job, except for an "iOS IAir" label just above the dock. The rest of the App Icon labels are perfect, which is impressive when you consider the difficulty most image generation models have with text.
ChatGPT doesn't produce images with AI watermarks; only the file names tell you these are ChatGPT images. That's concerning, as is the exceptional quality.
I expect the internet will soon be flooded with ChatGPT iPhone and other consumer electronics hardware renders. We won't know what's a leak, what's a hand-made render, or what's direct from the mind of ChatGPT based on prompts from one enterprising tech editor.
Apple might want to put a camera or two on your next Apple Watch, ostensibly to assist its AI in interpreting your environment and, perhaps, acting on your behalf: "There's a hill up ahead! You might want to accelerate your running pace, but watch out for that puddle; it might be a pothole!"
That sounds useful, but do we need a smartwatch to do a job best left to our eyes? You'll see that hill, you'll take note of the puddle, and subconsciously plan a route around it. Why would you need a camera on your wrist?
Forgive me if I am a bit against the whole concept of a wearable camera. I think that unless you're a police officer who has to record all their interactions with the public (see The Rookie for details), a chest-bound camera is a bad idea. I think most Humane AI Pin wearers (and Humane AI) quickly discovered this.
Cameras on glasses aren't as bad, perhaps because they sit so close to your eyes, which are already looking at and making mental notes about what you see anyway. There are privacy concerns, though, and when I've worn Ray-Ban Meta Smart Glasses, I've had a few people ask if I'm recording them. There's a little light on the frame that tells them as much, but I get the concern. No one wants to be recorded or have their picture taken without their explicit permission.
Never a good idea
We've seen cameras on smartwatches before. Back in 2013, Samsung unveiled the beefy Samsung Galaxy Gear, which I wore and reviewed. Samsung's idea for an on-wrist camera was, shall I say, unusual.
Instead of integrating the camera into the smartwatch's body, Samsung stuffed it into the wristband. This was one bad idea on top of another. Placing the camera on the wristband forced you to position your wrist just right to capture a photo, using the smartwatch display as a viewfinder. Moreover, there was concern about damaging the wristband, which could ruin the 2MP camera. It took, by the way, just passable photos.
Apple's apparent idea for a smartwatch camera is less about capturing a decent photo and more about ambient awareness. Information that one or more cameras can glean about your environment could inform Apple Intelligence – assuming Apple Intelligence is, by then, what Apple's been promising all along.
Powerful AI works best with data – both training data to build the models and real-time data for those same models to analyze. Our best iPhones and best smartwatches are full of sensors that tell these devices where they are, where they're going, how fast they're moving, and whether you've taken a fall or been in a car crash while carrying or wearing them. The watch has no camera, and your phone doesn't use its camera to build a data picture unless you ask it to.
Currently, you can squeeze your Camera Control button on the iPhone 16 and enable Visual Intelligence. This lets you take a picture and ask ChatGPT or Google Search to analyze it.
An eye on your wrist
A camera on your smartwatch, though, might always be on and trying, even as you pump your arms during a brisk run, to tell you about what's around and in front of you.
It might be looking at the people running toward you, and could possibly identify people on the fly, assuming it can get a clear enough shot. The watch could then connect to your phone or AirPods and identify people: "That's Bob Smith. According to his LinkedIn, he works in real estate." I'm not sure how those other people would feel about that, though.
I get that some of this sounds very cool and futuristic, but are we really meant to know that much about everything around us? Wouldn't it be better to explore what we want to with our eyes and ignore the rest? Exactly how much information can a human take?
It needs this but...
There are no guarantees that this will happen. It's just a rumor from Bloomberg News, but it makes sense.
It's high time for Apple to do the first truly significant Apple Watch redesign in a decade. Apple also needs some exciting new technology to remind people it can still innovate. Plus, more hardware sensors open the door to more powerful Apple Intelligence, and with all the recent missteps in that space, Apple is in dire need of an AI win.
I'm fine with all of that, as long as it does not involve putting cameras on my Apple Watch.
Nvidia’s DGX Station is powered by the GB300 Grace Blackwell Ultra
OEMs are making their own versions – Dell’s is the Pro Max with GB300
HP’s upcoming GB300 workstation will be the ZGX Fury AI Station G1n
Nvidia has unveiled two DGX personal AI supercomputers powered by its Grace Blackwell platform.
The first of these is DGX Spark (previously called Project Digits), a compact AI supercomputer that runs on Nvidia’s GB10 Grace Blackwell Superchip.
The second is DGX Station, a supercomputer-class workstation that resembles a traditional tower and is built with the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip.
Dell and HP reveal their versions
The GB300 features the latest-generation Tensor Cores and FP4 precision, and the DGX Station includes 784GB of coherent memory space for large-scale training and inferencing workloads, connected to a Grace CPU via NVLink-C2C.
The DGX Station also features the ConnectX-8 SuperNIC, designed to supercharge hyperscale AI computing workloads.
Nvidia’s OEM partners - Asus, HP, and Dell - are producing DGX Spark rivals powered by the same GB10 Superchip. HP and Dell are also preparing competitors to the DGX Station using the GB300.
Dell has shared new details about its upcoming AI workstation, the Pro Max with GB300 (its DGX Spark version is called Pro Max with GB10).
The specs for its supercomputer-class workstation include 784GB of unified memory, up to 288GB of HBM3e GPU memory, and 496GB of LPDDR5X memory for the CPU.
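For what it's worth, those figures are internally consistent – the GPU and CPU memory pools add up to the quoted unified total:

```python
# Checking that Dell's quoted memory pools add up to the unified figure.
hbm3e_gpu_memory_gb = 288    # up to 288GB of HBM3e for the GPU
lpddr5x_cpu_memory_gb = 496  # 496GB of LPDDR5X for the CPU

unified_total_gb = hbm3e_gpu_memory_gb + lpddr5x_cpu_memory_gb
print(unified_total_gb)  # 784, matching the 784GB of unified memory quoted above
```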
The system delivers up to 20,000 TOPS of FP4 compute performance, making it well suited for training and inferencing LLMs with hundreds of billions of parameters.
HP’s version of the DGX Station is called the ZGX Fury AI Station G1n. Z by HP is now one of the company’s product lines, and the “n” at the end of the name signifies that it’s powered by an Nvidia processor - in this case, the GB300.
HP says the ZGX Fury AI Station G1n “provides everything needed for AI teams to build, optimize, and scale models while maintaining security and flexibility,” noting that it will integrate into HP’s broader AI Station ecosystem, alongside the previously announced ZGX Nano AI Station G1n (its DGX Spark alternative).
HP is also expanding its AI software tools and support offerings, providing resources designed to streamline workflow productivity and enhance local model development.
Pricing for the DGX Station and the Dell and HP workstations isn’t known yet, but they obviously aren’t going to be cheap. Pricing for the tiny DGX Spark starts at $3,999, and the larger machines will cost significantly more.
Asus' new Ascent GX10 brings AI supercomputing power directly to developers
Promises 1000 TOPS of AI processing and can handle models up to 200 billion parameters
It's cheaper than Nvidia DGX Spark, with less storage but similar performance
AI development is getting ever more demanding, and Asus wants to bring high-performance computing straight to the desks of developers, researchers, and data scientists with the Ascent GX10, a compact AI supercomputer powered by Nvidia’s Grace Blackwell GB10 Superchip.
Asus’s rival to Nvidia’s DGX Spark (previously Project Digits) is designed to handle local AI workloads, making it easier to prototype, fine-tune, and run impressively large models without relying entirely on cloud or data center resources.
The Ascent GX10 comes with 128GB of unified memory, and the Blackwell GPU with fifth-generation Tensor Cores and FP4 precision support means it can deliver up to 1000 TOPS of AI processing power. It also includes a 20-core Grace Arm CPU, which speeds up data processing and orchestration for AI inferencing and model tuning. Asus says it will allow developers to work with AI models of up to 200 billion parameters without running into major bottlenecks.
Powerful yet compact
“AI is transforming every industry, and the Asus Ascent GX10 is designed to bring this transformative power to every developer’s fingertips,” said KuoWei Chao, General Manager of Asus IoT and NUC Business Group.
“By integrating the Nvidia Grace Blackwell Superchip, we are providing a powerful yet compact tool that enables developers, data scientists, and AI researchers to innovate and push the boundaries of AI right from their desks.”
Asus has built the GX10 with NVLink-C2C, which provides more than five times the bandwidth of PCIe 5.0, allowing the CPU and GPU to share memory efficiently, improving performance across AI workloads.
The system also comes with an integrated ConnectX network interface, so two GX10 units can be linked together to handle even larger models, such as Llama 3.1 with 405 billion parameters.
Asus says the Ascent GX10 will be available for pre-order in Q2 2025. Pricing details have not yet been confirmed by Asus, but Nvidia says it will cost $2999 and come with 1TB of storage.
In comparison, Nvidia’s own DGX Spark is a thousand dollars more ($3999) and comes with 4TB of storage.
HP ZBook Fury G1i is a powerful 18-inch mobile workstation
It's powered by up to an Intel Core Ultra 9 285HX and next-gen Nvidia RTX graphics
There's also a 16-inch model available with the same high-end specs and features
It’s a personal preference, but I’ve always liked laptops with bigger screens. That means 16 inches for me, but HP thinks 18-inch laptops are what professionals should be aiming for if they’re looking to replace their desktop PCs and get a solid productivity boost.
Billed as the world’s most powerful 18-inch mobile workstation, the HP ZBook Fury G1i 18” still manages to fit into a 17-inch backpack.
That extra two inches gives you roughly 30% more space to work with, which can come in handy when handling complex datasets, editing high-resolution media, or working across multiple windows.
Three-fan cooling
HP is pitching the laptop at developers and data scientists who need to train and run LLMs directly on the machine.
The Fury G1i 18” runs on Intel’s latest Core Ultra processors, up to the top-end Core Ultra 9 285HX, with peak speeds of 5.5GHz. These chips also include an NPU with up to 13 TOPS of AI performance. HP says the machine will support next-gen Nvidia RTX GPUs.
There’s support for up to 192GB of DDR5 memory and up to 16TB of PCIe Gen5 NVMe storage. Connectivity includes Thunderbolt 5, HDMI 2.1, USB-A ports, an SD card slot, and Ethernet.
The 18-inch display has a WQXGA (2560x1600) resolution, coupled with a fast 165Hz refresh rate, trading pixel density for smoother motion. Thermal performance is handled by a redesigned three-fan cooling system, along with HP’s Vaporforce tech, allowing up to 200W TDP without throttling under sustained load.
Other features include a spill-resistant RGB-backlit keyboard, four Poly Studio speakers, dual-array microphones, and an optional IR camera for facial login.
The Fury G1i is also available in a 16-inch model for anyone who feels 18 inches is too big to lug around. Pricing and availability details for both models are expected shortly.
AMD targets Nvidia’s Blackwell with upcoming Instinct MI355X accelerator
Oracle plans massive 30,000-unit MI355X cluster for high-performance AI workloads
That’s in addition to Stargate, Oracle’s 64,000-GPU Nvidia GB200 cluster
While AI darling Nvidia continues to dominate the AI accelerator market, with a share of over 90%, its closest rival, AMD, is hoping to challenge the Blackwell lineup with its new Instinct MI355X series of GPUs.
The MI355X, now expected to arrive by mid-2025, is manufactured on TSMC’s 3nm node and built on AMD's new CDNA 4 architecture. It will feature 288GB of HBM3E memory, bandwidth of up to 8TB/sec, and support for FP6 and FP4 low-precision computing, positioning it as a strong rival to Nvidia’s Blackwell B100 and B200.
In 2024, we reported on a number of big wins for AMD, which included shipping thousands of its MI300X AI accelerators to Vultr, a leading privately-held cloud computing platform, and to Oracle. Now, the latter has announced plans to build a cluster of 30,000 MI355X AI accelerators.
Stargate
This latest news was revealed during Oracle’s recent Q2 2025 earnings call, where Larry Ellison, Chairman and Chief Technology Officer, told investors, “In Q3, we signed a multi-billion dollar contract with AMD to build a cluster of 30,000 of their latest MI355X GPUs.”
Although he didn’t go into further detail beyond that, Ellison did talk about Project Stargate, saying, “We are in the process of building a gigantic 64,000 GPU liquid-cooled Nvidia GB200 cluster for AI training.”
He later added, “Stargate looks to be the biggest AI training project out there, and we expect that will allow us to grow our RPO even higher in the coming quarters. And we do expect our first large Stargate contract fairly soon.”
When questioned further about Stargate by a Deutsche Bank analyst, Ellison gave a reply that could just as easily apply to the cluster of MI355X AI accelerators Oracle is planning to build.
"The capability we have is to build these huge AI clusters with technology that actually runs faster and more economically than our competitors. So it really is a technology advantage we have over them. If you run faster and you pay by the hour, you cost less. So that technology advantage translates to an economic advantage which allows us to win a lot of these huge deals,” he said.
Ellison also touched on Oracle’s data center strategy, saying, “So, we can start our data centers smaller than our competitors and then we grow based on demand. Building these data centers is expensive, and they’re really expensive if they’re not full or at least half full. So we tend to start small and then add capacity as demand arises.”
Kioxia launches 122.88TB SSD with PCIe Gen5 and dual-port support
The LC9 Series NVMe SSD is designed for AI workloads and hyperscale storage
The new drive comes in a compact 2.5-inch form factor
After nearly seven years at the top, Nimbus Data’s massive ExaDrive 100TB 2.5-inch SSD has been dethroned by Kioxia, which has unveiled a new 122.88TB model that not only offers a higher storage capacity but also supports PCIe Gen5, a first for this category.
Several companies have previously announced 120TB-class SSDs, including Solidigm, but Kioxia's LC9 Series 122.88TB NVMe SSD stands out by pairing its ultra-high capacity with a compact 2.5-inch form factor and a next-gen interface with dual-port capability for fault tolerance or connectivity to multiple compute systems.
"AI workloads are stretching the capabilities of data storage, asking for larger capacities and swifter access to the extensive datasets found in today's data lakes, and Kioxia is ready to offer the necessary advanced technologies including 2 Tb QLC BiCS FLASH generation 8 of 3D flash memory, CBA and the complimenting AiSAQ," said Axel Störmann, VP & Chief Technology Officer for SSD and Embedded Memory products at Kioxia Europe GmbH.
Supporting AI system developers' needs
The 122.88TB SSD is aimed at hyperscale storage systems, AI workloads, and other data-intensive applications that rely on capacity and speed. There’s no word on availability or pricing yet, but the company does plan to showcase the new drive at "various upcoming conferences".
"This new LC9 Series NVMe SSD is an instrumental Kioxia product expansion that will support AI system developers' needs for high-capacity storage, high performance, and energy efficiency for applications such as AI model training, inference, and Retrieval-Augmented Generation on a vaster scale," Störmann said.
Reporting on the new SSD, ServeTheHome notes, “This is a hot segment of the market, and it is great to see Kioxia joining. As AI clusters get larger, the shared storage tier is usually measured in Exabytes. Teams have found that replacing hard drives with SSDs often reduces power, footprint, and TCO compared to running hybrid arrays. Moving from lower-capacity drives to the 122.88TB capacity in a 2.5-inch drive form factor really highlights the advantage of flash in these systems.”
The EU is officially out of control. It's now demanding that Apple break down the competitive advantage it's built with attractive features like AirPlay and AirDrop and essentially open them up to the competition, stripping Apple – bit by bit – of that advantage.
Ever since the EU first implemented its Digital Markets Act, it's treated Apple like a global monopoly – or rather, a disrespectful child that deserves to spend time in a corner.
I know many cheer these changes. Why should Apple force people to use its App Store or its now-retired Lightning cable?
Apple has complied but also warned about the dangers of such compliance. When the EU forced sideloading, Apple promised, "the risks will increase." If we haven't seen that happen, it may be because the majority of iPhone owners are still using the trusted and well-regarded App Store.
I consider this a change no one, save the EU and some software companies that pressed the issue, wanted.
In the case of USB-C, I've long believed Apple was heading in that direction anyway, but the threat of fines forced Apple's hand and made it accelerate its plans.
Open sesame
Now, though, we have the EU demanding that Apple open up nine core iOS features, including push notifications for non-Apple smartwatches, seamless pairing between non-Apple headphones and Apple devices, and AirPlay and AirDrop. In the last instance, the EU is demanding Apple open iOS up to third-party solutions and ensure they work as well as native software.
Naturally, Apple is not happy and shared this comment with TechRadar:
"Today’s decisions wrap us in red tape, slowing down Apple’s ability to innovate for users in Europe and forcing us to give away our new features for free to companies who don’t have to play by the same rules. It’s bad for our products and for our European users. We will continue to work with the European Commission to help them understand our concerns on behalf of our users."
As I'm sure you can gather from the tone, Apple is fed up. This constant stream of EU enforcements, all designed to diminish Apple and hoist up competitors, is ridiculous and increasingly unfair.
Let's zero in on AirDrop as an example.
Drop it like it's hot
(Image credit: TechRadar)
AirDrop, which lets you quickly share files, photos, and videos between iPhones and other Apple ecosystem devices, arrived more than a decade ago on iOS 7. It was a transformative and brilliant bit of programming that instantly opened up an ad-hoc network between, say, a pair of iPhones. It did require some learning. Open AirDrop settings on phones could result in you unexpectedly receiving an illicit photo (yes, it happened to me once and it was terrible). Apple has since vastly improved AirDrop controls.
Not a lot of people used it at first, but every time I went to a party where I was often taking pictures, I would grab the host and quickly drop the photos onto their phones. They were usually shocked and deeply appreciative.
There was, for years, nothing quite like it on the Android side until Samsung unveiled Quick Share and Google launched Nearby Share in 2020. The two later merged to become just Quick Share.
There's no doubt Apple's success with AirDrop spurred the development of Quick Share – and isn't that exactly how competition is supposed to work? You don't look at one company's successful deployment of technology and then demand that it make it possible for you to deploy a copycat app, on the successful company's platform no less.
But this is what the EU is demanding of Apple. It must make it possible for competitors to compete with Apple on its own platform, and why? Because apparently, they cannot do it without the EU's help.
I actually do not think that's true. Google and Samsung, for instance, are not stepping up to say they do not need this help because it serves them no purpose to do so. If the EU wants to slap Apple, let them. It certainly doesn't harm any of these competitors (until they fall under the EU's watchful gaze).
In the EU's world, there is no difference between competitors. They want a level playing field, even if at an innovation level, one company is outperforming the other.
Ecosystem FTW
Apple has built a fantastic ecosystem that delivers significant benefits to those who live inside of it. Yes, that does in a way define which smartwatch and earbuds I use. But, for more than 20 years, it had no impact on the laptop I carried. I was a dyed-in-the-wool Windows fan, and even though I used an iPhone and AirPods, and I wore an Apple Watch, I saw no need to switch to a MacBook.
When I did make the switch, it was to see if I liked the macOS experience better than Windows (spoiler alert: I did), and, yes, it turns out that there were instant benefits to the switch, like AirDrop access to files on my iPhone and iPad.
Everything is easier when you have all Apple products, but that's not an unfair advantage – it's engineering and excellence. The EU would like to wipe that out and make Apple as average as possible so it's fair for everyone. But that's not fair to Apple and, honestly, not to you, the Apple user, either. You pay a premium for the best programming, the best products, and the best interoperability.
You won't get that by mixing and matching some from Apple and some from, for instance, Samsung, even if the EU wants you to. I love many Samsung, Google, OnePlus, and Microsoft products and there is nothing wrong with a non-homogenous setup. There should not, however, be an issue with all-Apple-all-the-time.
The EU needs to step back and get out of the way of smart technology and only act when consumers are being harmed. There was no harm here, just some small companies whining because they weren't winning.
You might think this is an EU-only issue but remember that what starts in Europe usually flies over the Atlantic to the US and eventually all global markets. Put another way, when the EU sneezes, we all catch a cold.
Google Messages is improving its message-deleting features
You'll soon be able to delete a message for everyone
We now have screenshots showing how the feature works
It's not a great feeling, sending a text and then regretting it – instantly, the next morning, or any time in between – and Google Messages looks set to give users a safety net with the ability to remotely delete messages for everyone in a conversation.
This was first spotted last month, but now the folks at Android Authority have actually managed to get it working. This is based on some code digging done in the latest version of Google Messages for Android.
While the feature isn't live for everyone yet, the Android Authority team tweaked the app to get it to display some of the functionality. Deleting a text brings up a dialog asking if you just want to wipe your local copy of it or erase it for all recipients.
If a message is wiped, that brings up a "Message deleted" placeholder in the conversation for everyone who's participating. It seems as though there's a 15-minute window for deleting – so you'll need to be relatively quick.
Bring it back
The upgrade comes courtesy of RCS Universal Profile v2.7, which Google Messages is in the process of adding support for. The remote delete feature may not be available for devices with older software installed – so bear that in mind for your text chats.
Up until now, deleting a text only removed the message on your own phone. Once it had been delivered and downloaded on the recipient's device(s), there was nothing you could do to take it back.
That will change when this update finally rolls out in full, though it's not clear exactly when that will be. Considering Android Authority has been able to access some of the screens that show the feature working, it shouldn't be too long now.
Support for this feature varies in other apps: WhatsApp lets you delete sent messages for all recipients, while iMessage lets you delete sent messages, but only your local copy (though you can unsend messages within a two-minute window).
Microsoft & Oracle absent from nuclear pledge signed by Amazon, Meta, and Google
It aims to triple nuclear capacity by 2050 to support global energy needs
Nuclear seen as key to powering AI-driven data centers with clean energy
Even though Microsoft is seriously exploring nuclear energy as a way to power its data centers – even signing a deal in 2024 to purchase energy from the restarted Three Mile Island (TMI) nuclear plant – it is notably absent from a new Large Energy Users Pledge that supports the global expansion of nuclear capacity.
That pledge has attracted major signatories such as Amazon, Meta, and Google, but neither Microsoft nor Oracle, which is also exploring nuclear energy, is on the list.
Led by the World Nuclear Association, the pledge was first introduced at the World Nuclear Symposium in September 2023, and has gained backing from 14 major global banks and financial institutions, 140 nuclear industry companies, and 31 countries.
Around-the-clock clean energy
Its purpose is to drive home nuclear energy’s “essential role in enhancing energy security, resiliency and providing continuous clean energy,” and it sets a target to triple global nuclear capacity by 2050.
Nuclear power currently supplies about 9% of the world’s electricity via 439 reactors.
The call to action goes beyond traditional energy applications. It also outlines nuclear's potential to serve high-demand sectors like data centers, where the rise of artificial intelligence has led to soaring energy needs.
While it typically takes at least five years to construct a nuclear plant, micro nuclear reactors, expected to be available by the early 2030s, could be a quicker, cheaper solution for powering large-scale computing operations.
"We are proud to sign a pledge in support of tripling nuclear capacity by 2050, as nuclear power will be pivotal in building a reliable, secure, and sustainable energy future," said Lucia Tian, Google’s Head of Clean Energy & Decarbonization Technologies.
"Google will continue to work alongside our partners to accelerate the commercialization of advanced nuclear technologies that can provide the around-the-clock clean energy necessary to meet growing electricity demand around the world."
That message was echoed by Urvi Parekh, Head of Global Energy at Meta. “As global economies expand, the need for a reliable, clean, and resilient energy supply is paramount. Nuclear energy, with its ability to provide continuous power, can help meet this rising demand. We’re excited to join alongside this multi-organizational effort with the Tripling Nuclear Pledge to reiterate our commitment to nuclear energy.”
Brandon Oyer, Head of Americas Energy and Water for AWS, emphasized the urgency of scaling nuclear power. “Accelerating nuclear energy development will be critical to strengthening our nation’s security, meeting future energy demands, and addressing climate change. Amazon supports the World Nuclear Association’s pledge, and is proud to have invested more than $1 billion over the last year in nuclear energy projects and technologies, which is part of our broader Climate Pledge commitment to be net-zero carbon by 2040.”
You can view the Large Energy Users Pledge, which is signed by Meta, Amazon, Google and ten other companies, with a statement of support by Siemens Energy, here (PDF).
Nvidia has taken the world a step closer to smart, humanoid robots with the launch of its latest advanced AI model.
At its GTC 2025 event, the company revealed Isaac GR00T N1, which it says is "the world’s first open humanoid robot foundation model", alongside several other important development tools.
Nvidia says its tools, which are available now, will make developing smarter and more functional robots easier than ever, along with allowing them to have more humanoid reasoning and skills - which doesn't sound terrifying at all.
Isaac GR00T N1
“The age of generalist robotics is here,” said Jensen Huang, founder and CEO of NVIDIA. “With NVIDIA Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI.”
The company says its robotics work can help fill a shortfall of more than 50 million workers caused by a global labor shortage.
Nvidia says Isaac GR00T N1, which can be trained on real or synthetic data, can "easily" master tasks such as grasping, moving objects with either a single arm or multiple arms, and passing items from one arm to the other – but it can also carry out multi-step tasks that combine a number of general skills.
The model is built on a dual-system architecture inspired by the principles of human cognition: “System 1” is a fast-thinking action model, mirroring human reflexes or intuition, while “System 2” is a slow-thinking model for "deliberate, methodical decision-making."
Powered by a vision language model, System 2 is able to consider and analyze its environment, and the instructions it was given, to plan actions - which are then translated by System 1 into precise, continuous robot movements.
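As a purely illustrative sketch of that split – every class and function name below is invented for illustration and is not Nvidia's actual GR00T interface – the division of labor looks something like this:

```python
# Illustrative-only sketch of the dual-system idea described above.
# All names here are invented; this is not Nvidia's GR00T N1 API.

class System2Planner:
    """Slow, deliberate planner: stands in for the vision language model."""

    def plan(self, camera_frames: list[str], instruction: str) -> list[str]:
        # A real System 2 would reason over the frames and the instruction;
        # here we just fake a plausible step list.
        return [
            f"locate the object mentioned in: {instruction}",
            "grasp the object",
            "hand the object to the other arm",
        ]


class System1Controller:
    """Fast, reflex-like action model turning each step into motor commands."""

    def act(self, step: str) -> str:
        return f"continuous joint trajectory for: {step}"


def run(instruction: str, camera_frames: list[str]) -> None:
    planner, controller = System2Planner(), System1Controller()
    for step in planner.plan(camera_frames, instruction):  # System 2 deliberates...
        print(controller.act(step))                        # ...System 1 executes


run("move the cup from the left arm to the right arm", ["frame_0.png"])
```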
Among the other tools being released are a range of simulation frameworks and blueprints, such as the NVIDIA Isaac GR00T Blueprint for generating synthetic data, which helps produce the large, detailed synthetic datasets needed for robot development – data that would be prohibitively expensive to gather in real life.
There is also Newton, an open source physics engine, created alongside Google DeepMind and Disney Research, which Nvidia says is purpose-built for developing robots.
Huang was joined on stage by Star Wars-inspired BDX droids during his GTC keynote, showing the possibilities of the technology in theme parks or other entertainment locations.
Nvidia first launched Project GR00T ("Generalist Robot 00 Technology") at GTC 2024, primarily focusing on industrial use cases. The idea was robots that could learn and become smarter by watching human behaviour, understanding natural language, and emulating movements, allowing them to quickly pick up the coordination, dexterity, and other skills needed to navigate, adapt, and interact with the real world.
Amazon is turning off the ability to process voice requests locally. It's a seemingly major privacy pivot and one that some Alexa users might not appreciate. However, this change affects exactly three Echo devices and only if you actively enabled Do Not Send Voice Recordings in the Alexa app settings.
Right. It's potentially not that big of a deal and, to be fair, the level of artificial intelligence Alexa+ is promising, let alone the models it'll be using, all but precludes local processing. It's pretty much what Daniel Rausch, Amazon's VP of Alexa and Echo, told us when he explained that these queries would be encrypted, sent to the cloud, and then processed by Amazon's and partner Anthropic's AI models at servers far, far away.
That's what's happening, but let's unpack the general freakout.
Amazon has since cleaned up its data act with encryption and, with this latest update, promises to delete your recordings from its servers.
A change for the few
(Image credit: Future)
This latest change, though, sounded like a step back because it takes away a consumer control, one that some might've been using to keep their voice data off Amazon's servers.
However, the vast majority of Echo devices out there aren't even capable of on-device voice processing, which is why most of them didn't even have this control.
A few years ago, Amazon published a technical paper on its efforts to bring "On-device speech processing" to Echo devices. It did so to put "processing on the edge" and reduce latency and bandwidth consumption.
Turns out it wasn't easy – Amazon described it as a massive undertaking. The goal was to put automatic speech recognition, whisper detection, and speech identification locally on a tiny, relatively low-powered smart speaker system. Quite a trick, considering that in the cloud, each process ran "on separate server nodes with their own powerful processors."
The paper goes into significant detail, but suffice it to say that Amazon developers used a lot of compression to get Alexa's relatively small AI models to work on local hardware.
It was always the cloud
In the end, the on-device audio processing was only available on those three Echo models, but there is a wrinkle here.
The specific feature Amazon is disabling, "Do Not Send Voice Recordings," never precluded your prompts from being handled in the Amazon cloud.
The processing power that these few Echos had was never meant to handle the full Alexa query locally. Instead, the silicon was used to recognize the wake word ("Alexa"), record the voice prompt, transcribe that prompt into text on the device, and send the text to Amazon's cloud, where the AI acts on it and sends back a response.
The local audio is then deleted.
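To make that division of labor concrete, here's a purely illustrative sketch – every function below is an invented stand-in, not Amazon's code or any real Alexa API:

```python
# Illustrative-only sketch of the flow described above.
# Every function here is an invented stand-in, not Amazon software or an API.

def detect_wake_word(audio: bytes) -> bool:
    # Stand-in for the on-device model listening for "Alexa".
    return audio.lower().startswith(b"alexa")

def transcribe_locally(audio: bytes) -> str:
    # Stand-in for the compressed on-device speech-recognition model.
    return audio.decode(errors="ignore")

def send_text_to_cloud(transcript: str) -> str:
    # Stand-in for the encrypted request to Amazon's cloud, where the large
    # Alexa/Anthropic models actually interpret and answer the prompt.
    return f"(cloud response to: {transcript!r})"

def handle_utterance(audio: bytes):
    """Audio stays on the device; only the text transcript leaves it."""
    if not detect_wake_word(audio):
        return None
    transcript = transcribe_locally(audio)  # transcription happens on the Echo
    del audio                               # the recording itself is then discarded
    return send_text_to_cloud(transcript)   # only text goes to Amazon's servers

print(handle_utterance(b"alexa, what's the weather like?"))
```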
Big models need cloud-based power
(Image credit: Amazon)
Granted, this is likely how everyone would want their Echo and Alexa experience to work. Amazon gets the text it needs but not the audio.
But that's not how the Alexa experience works for most Echo owners. I don't know how many people own those particular Echo models, but there are almost two dozen different Echo devices, and this affects just three of them.
Even if those are the most popular Echos, the change only affects people who dug into Alexa settings to enable "Do Not Send Voice Recordings." Most consumers are not making those kinds of adjustments.
This brings us back to why Amazon is doing this. Alexa+ is a far smarter and more powerful AI with generative, conversational capabilities. Its ability to understand your intentions may hinge not only on what you say, but also on your tone of voice.
It's true that even though your voice data will be encrypted in transit, it surely has to be decrypted in the cloud for Alexa's various models to interpret and act on it. Amazon is promising safety and security, and to be fair, when you talk to ChatGPT Voice and Gemini Live, their cloud systems are listening to your voice, too.
When we asked Amazon about the change, here's what they told us:
“The Alexa experience is designed to protect our customers’ privacy and keep their data secure, and that’s not changing. We’re focusing on the privacy tools and controls that our customers use most and work well with generative AI experiences that rely on the processing power of Amazon’s secure cloud. Customers can continue to choose from a robust set of tools and controls, including the option to not save their voice recordings at all. We’ll continue learning from customer feedback, and building privacy features on their behalf.”
For as long as the most impactful models remain too big for local hardware, this will be the reality of our Generative AI experience. Amazon is simply falling into line in preparation for Alexa+.
It's not great news, but it's also not the disaster or the privacy and data-safety nightmare it's been made out to be.