Friday, March 28, 2025

April Fool's Day 2025 Pranks: Wearable Mattress, Cat Poo Scented Candle, Sports-Drink Shampoo

If you see a weird product this week and next, don't be so sure it's real.

from CNET https://ift.tt/wZ3oeIW

Best TV on a Budget for 2025

You don't have to spend a lot of cash to get a good TV. Here are our top picks for the best budget televisions from Samsung, Roku and more.

from CNET https://ift.tt/poXYGdU

Latest Tech News


  • Microsoft pulled out of a $12bn deal with CoreWeave, citing delays
  • OpenAI took over the contract, backed by Microsoft’s own investment funds
  • AI sector remains a closed loop driven by a few dominant players

CoreWeave is eyeing a huge (potentially $2.5 billion) IPO in the coming weeks, but it has also had a few unflattering news stories to contend with recently.

Jeffrey Emanuel, whose viral essay described Nvidia as overpriced and led to it losing $600 billion in a single day, has described CoreWeave as a turkey and called it the “WeWork of AI”.

More recently, Microsoft chose to walk away from a nearly $12 billion option to buy more data-center capacity from the AI hyperscaler.

OpenAI to the rescue

The Financial Times (FT) cited sources familiar with the matter as saying Microsoft had withdrawn from some of its agreements “over delivery issues and missed deadlines”, which shook the tech giant’s confidence in CoreWeave.

The FT added that despite this, Microsoft still had “a number of ongoing contracts with CoreWeave and it remained an important partner.”

Microsoft is CoreWeave’s biggest customer, and the AI hyperscaler disputed the FT's story, saying “All of our contractual relationships continue as planned – nothing has been cancelled, and no one has walked away from their commitments.”

Shortly after that news broke, it was reported that OpenAI would be taking up Microsoft's nearly $12 billion option instead, helping CoreWeave avoid a potentially embarrassing setback so near to its closely watched IPO.

Rohan Goswami at Semafor made a couple of interesting observations on the news, noting, “This isn’t a sign that Microsoft is pulling back on AI - ‘We’re good for our $80 billion,’ Satya Nadella said on CNBC - but an indication that the company is being more tactical about exactly when and where it spends. At the same time, OpenAI’s biggest backer is Microsoft, meaning that OpenAI is paying CoreWeave with money that is largely Microsoft’s to begin with.”

He described this as the rub, saying, “The AI economy is currently a closed loop and will stay that way until a broader swath of economic actors like big and medium-sized companies start spending real dollars on AI software and services. Until then, nearly all the money is coming from a few companies - chiefly Nvidia and Microsoft - which themselves depend on the goodwill of their public shareholders to keep underwriting it all.”

from Latest from TechRadar US in News,opinion https://ift.tt/QLSmk4P

Thursday, March 27, 2025

Nintendo's Allowing Digital Game Sharing: Here's What That Means and How It Works

Virtual Game Cards are coming, and they'll work across systems and family accounts, and on both the Switch and Switch 2. Here's what we know so far.

from CNET https://ift.tt/txGp5Mg

If You Need Multiple Apple AirTags, This 4-Pack Is $30 Off for Amazon's Big Spring Sale

I use Apple AirTags to track pretty much everything I own. Right now, you can get a four-pack for nearly 30% off.

from CNET https://ift.tt/ZQGclWP

Latest Tech News


  • Ascent GX10 is Asus's take on Nvidia's DGX Spark AI supercomputer
  • ServeTheHome spotted the product at GTC 2025 and went hands on
  • The site took photos and noted the AI computer is lighter and cheaper

Nvidia has recently been showing off DGX Spark, its Mac Mini-sized AI supercomputer built around the GB10 Grace Blackwell Superchip.

Originally called Project Digits, the device has been created to bring advanced model development and inferencing directly to desktops. Although it looks like a mini PC, it’s incredibly powerful and designed to handle demanding AI workflows such as fine-tuning, inference, and prototyping without relying entirely on external infrastructure.

Aimed at developers, researchers, data scientists, and students working with increasingly complex AI models locally, it comes with 128GB of LPDDR5x unified memory and up to 4TB of NVMe SSD storage. The DGX Spark isn’t cheap at $3,999, but if you’re looking to save some money without cutting corners, there are some alternatives.

The lighter choice

Dell’s Pro Max with GB10 and HP’s ZGX Nano AI Station are DGX Spark clones, built around the GB10 Grace Blackwell Superchip. Asus also has its own GB10 AI supercomputer clone, the Ascent GX10, which is priced at $2,999, significantly less than Nvidia’s offering.

Shown off at Nvidia GTC 2025 (as, naturally, was Nvidia’s own DGX Spark), the Ascent GX10 comes with 128GB of unified memory and a Blackwell GPU with fifth-generation Tensor Cores and FP4 precision support. While DGX Spark has 4TB of storage, Asus’s version only has 1TB.

ServeTheHome was at the conference and spotted the Ascent GX10 on Asus’s stand where it snapped a few photos of the product.

The site also noted, “The front of the system has the ASUS logo and a power button. This may sound strange, but ASUS using plastic on the outside of the chassis in parts versus Nvidia using more metal is an interesting trade-off. Nvidia DGX Spark feels in hand much more like the Apple Mac Studio from a density perspective while the Asus felt lighter. If you truly want this to be a portable AI box, then ASUS may have a leg up, especially if you want to cluster it.”

On the rear of the system, STH says there’s an HDMI port, four high-speed USB4 40Gbps ports, a 10GbE NIC for base networking, and a dual-port Nvidia ConnectX-7, which Nvidia described as an Ethernet version of the CX7 designed for RDMA clustering.

STH’s Patrick Kennedy noted, “For some context here, a Nvidia ConnectX-7 NIC these days often sells for $1500–2200 in single unit quantities, depending on the features and supply of the parts. At $2999 for a system with this built-in that is awesome. Our sense is that folks are going to quickly figure out how to cluster these beyond the 2-unit cluster that Nvidia is going to support at first.”

Nvidia GB10 motherboard

(Image credit: ServeTheHome)

from Latest from TechRadar US in News,opinion https://ift.tt/yO7rhFd

Latest Tech News


  • There are reportedly no current plans for another iPhone mini
  • The last 'mini' model launched in September 2021
  • 1 in 5 TechRadar readers say sub-6 inches is the best phone size

The last 'mini' phone we saw from Apple was the 5.4-inch iPhone 13 mini, which launched in September 2021, and was replaced by the 6.7-inch iPhone 14 Plus – and it seems unlikely that Apple is going to bring back a smaller iPhone model any time soon.

Bloomberg reporter Mark Gurman, who is usually a reliable source when it comes to Apple, said in a live Q&A (via MacRumors) that Apple has "really shifted away" from smaller form factors and that its engineers "are not working on a smaller iPhone right now".

The current iPhone line-up comprises the 6.1-inch iPhone 16, the 6.7-inch iPhone 16 Plus, the 6.3-inch iPhone 16 Pro, and the 6.9-inch iPhone 16 Pro Max – so if you want anything smaller than 6.1 inches in terms of display size, you're out of luck.

Gurman did say Apple might one day reconsider its position if market pressures change, but don't expect anything for the foreseeable future. This year, the iPhone 16 Plus is predicted to be replaced by the iPhone 17 Air, possibly with the same screen size.

Should Apple reconsider?

The Galaxy Z Flip 6 at least folds down to a small size (Image credit: Samsung)

Apple never said anything officially, but market reports suggested the iPhone 13 mini wasn't a great seller – which most likely sealed its fate. But according to many TechRadar readers, the iPhone 13 mini was the perfect size for a smartphone.

We ran a poll on the TechRadar WhatsApp channel asking you what your favorite phone screen size was. Top of the pile, with 241 votes out of 799 (30%), was the largest size besides foldables: phones 6.9 inches or bigger, such as the iPhone 16 Pro Max.

In second place, however, were phones with screens under 6 inches in size – like, for example, the iPhone 13 mini. This size got 171 votes (21%), but unfortunately for small phone fans, it's getting harder and harder to find more compact handsets.
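For readers who like to check the math, the vote shares above work out as straightforward percentages; a quick Python sketch (the counts are from the poll, the option labels are paraphrased):

```python
# Vote counts from the TechRadar WhatsApp poll (799 votes total)
TOTAL_VOTES = 799
votes = {
    "6.9 inches or bigger": 241,
    "under 6 inches": 171,
}

for option, count in votes.items():
    share = round(count / TOTAL_VOTES * 100)  # 241/799 rounds to 30%, 171/799 to 21%
    print(f"{option}: {count}/{TOTAL_VOTES} = {share}%")
```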

Clearly, not enough of the people who like smaller-sized handsets went out and bought an iPhone 13 mini, and Apple has taken note. If you fall into that category, you could try a flip foldable like the Samsung Galaxy Z Flip 6 instead.

from Latest from TechRadar US in News,opinion https://ift.tt/OSasF0l

Wednesday, March 26, 2025

23andMe Files for Bankruptcy Protection: What Will Happen to Your Data?

What's scary is how much we don't know. If you're worried about data privacy, think about deleting your data now.

from CNET https://ift.tt/2SJHTMP

Latest Tech News

Thanks, Sam Altman, for giving us access to ChatGPT's new integrated image-generation skills. They're, as Steve Jobs might've described them, insanely good.

So good, in fact, that I'm worried now about my little corner of the universe where we try to discern the accuracy of renders, models, and pre-production leaks that might tell us the tale of future Apple products, like the rumored iPhone 17 Air.

For those who don't know, the iPhone 17 Air (or Slim) is the oft-talked-about but never-confirmed ultra-slim iPhone 16 Plus/iPhone 16e/SE hybrid that could be the most exciting iPhone update when Apple likely unveils a whole new iPhone 17 line in September.

While it may not be the most powerful iPhone, it should be the biggest and thinnest of the bunch. Even the single rear camera might not be enough to keep potential buyers away.

Imagining what it could look like, well, that's my job. Or it was until I started working with ChatGPT running the recently updated 4o model, which is capable of generating images out of thin air or based on photos and images you upload into it.

It's a slightly methodical model, taking up to 45 seconds to generate an image that flows in slowly, almost one microscopic, horizontal line of pixels at a time. The results are something to behold.

It's not just the quality but how ChatGPT can maintain the thread and cohesion of images from prompt to prompt. Usually, if you start with image generation in something like OpenAI's DALL·E or, say, X's Grok, it'll do a good job with the first image.

However, when you request changes, elements of the original disappear or end up altered. It's even harder to create a series of images that appear to be part of the same story or theme. There are usually too many differences.

ChatGPT 4o image generation appears different and, possibly, more capable.

ChatGPT 4o did a nice anime glow-up with my picture of a hawk (left). (Image credit: Chat GPT-generated images along with source material)

Having already experimented a bit with the model shortly after Altman and other OpenAI engineers announced it, I quickly found that ChatGPT 4o did its best work when you started with a solid source.

I initially had fun turning images of myself and even photos I took this week of a peregrine hawk into anime. However, I was curious about ChatGPT's photo-realism capabilities, especially as they relate to my work.

Apple announced this week that WWDC 2025's keynote would fall on June 9. It's an event where the tech giant outlines platform updates (iOS, iPadOS, macOS, etc) that inform much of how we think about Apple's upcoming product lineup. With information like this, we can start to map out the future of the anticipated iPhone 17 line. Visualizing what that will look like can be tough, though. So, I decided to let ChatGPT's newest image model show me the way.

A real photo of an iPhone SE on the left and a ChatGPT 4o-generated one on the right. (Image credit: Chat GPT-generated images along with source material)

Since the iPhone 17 Air would conceivably be the newest member of the iPhone family (shoving aside the less-than-exciting iPhone 16e), I decided to focus on that.

Initially, I handed ChatGPT an older iPhone SE review image with this prompt:

"Use this photo to imagine what an Apple iPhone 17 Air might look like. Please make it photo-realistic and a nice, bright color."

ChatGPT did a good job of maintaining the settings from the original photo and most of my hand, though I think I lost a finger. It did well updating the finish and even added a second camera, making it part of a raised camera bump.

I followed with this prompt:

"This is good. Since the iPhone 17 Air is supposed to be super-thin, can you show it from the side?"

ChatGPT lost the background and made the image look like an ad for the iPhone 17 Air. It was a nice touch, but the phone didn't look thin enough. I prompted ChatGPT to make it thinner, which it did.

This was progress, but I quickly realized my error. I hadn't based the prompt on available iPhone 17 Air rumors, and maybe I wasn't being prescriptive enough in my prompts.

(Image credit: Chat GPT-generated images along with source material)

Since the iPhone SE is now a fully retired design, I decided to start over with a review image of the iPhone 16 Pro and initially used the same prompt, which delivered an iPhone 16 Pro in a lovely shade of blue.

This time, when I asked to see the thin side of the phone, I told ChatGPT, "Don't change the background."

I was pleased to see that ChatGPT more or less kept my backyard bushes intact and seamlessly inserted the new phone into something that now sort of looked like a more attractive version of my hand.

My original iPhone 16 Pro review image is on the left. ChatGPT 4o's work is on the right. (Image credit: Chat GPT-generated images along with source material)

Some iPhone 17 Air rumors claim the phone might have just one camera, so I told ChatGPT to remove two cameras and rerender.

In previous prompts, I'd told ChatGPT to "make it thinner," but what if I gave the chatbot an exact measurement?

"Now show me the side of the iPhone 17 Air. It should be 5.4mm thick and the same color."

(Image credit: Chat GPT-generated images along with source material)

This was almost perfect. I did notice, though, that there was no discernible camera bump, which seems unlikely in a 5.4mm-thick iPhone. Even the anticipated ultra-thin Samsung Galaxy S25 Edge features a camera bump. There is no way the iPhone 17 Air will get away without one.

Finally, I asked for a render of the screen:

"Now show me the iPhone 17 Air screen. Make sure it shows the Dynamic Island. The screen should be bright and look like an iPhone home screen with apps and widgets."

Once again, ChatGPT did an excellent job, except for an "iOS IAir" label just above the dock. The rest of the App Icon labels are perfect, which is impressive when you consider the difficulty most image generation models have with text.

ChatGPT doesn't produce images with AI watermarks; only the file names tell you these are ChatGPT images. That's concerning, as is the exceptional quality.

I expect the internet will soon be flooded with ChatGPT iPhone and other consumer electronics hardware renders. We won't know what's a leak, what's a hand-made render, or what's direct from the mind of ChatGPT based on prompts from one enterprising tech editor.

from Latest from TechRadar US in News,opinion https://ift.tt/NBDmpoT

Tuesday, March 25, 2025

WWDC 2025: Apple Confirms June 9 Date for Next Major Event

The tech giant is expected to reveal iOS 19 and other major software updates at its annual developer conference.

from CNET https://ift.tt/W2vPYe3

Latest Tech News

Apple might want to put a camera or two on your next Apple Watch, ostensibly to assist its AI in interpreting your environment and, perhaps, acting on your behalf: "There's a hill up ahead! You might want to accelerate your running pace, but watch out for that puddle; it might be a pothole!"

That sounds useful, but do we need a smartwatch to do a job best left to our eyes? You'll see that hill, you'll take note of the puddle, and subconsciously plan a route around it. Why would you need a camera on your wrist?

Forgive me if I am a bit against the whole concept of a wearable camera. I think that unless you're a police officer who has to record all their interactions with the public (see The Rookie for details), a chest-bound camera is a bad idea. I think most Humane AI Pin wearers (and Humane itself) quickly discovered this.

Cameras on glasses aren't as bad, perhaps because they're so close to your eyes, which are already looking at and making mental notes about everything around you anyway. There are privacy concerns, though, and when I've worn Ray-Ban Meta Smart Glasses, I've had a few people ask if I'm recording them. There's a little light on the frame that tells them as much, but I get the concern. No one wants to be recorded or have their picture taken without their explicit permission.

Never a good idea

We've seen cameras on smartwatches before. Back in 2013, Samsung unveiled the beefy Samsung Galaxy Gear, which I wore and reviewed. Samsung's idea for an on-wrist camera was, shall I say, unusual.

Instead of integrating the camera into the smartwatch's body, Samsung stuffed it into the wristband. This was one bad idea on top of another. Placing the camera on the wristband forced you to position your wrist just right to capture a photo, using the smartwatch display as a viewfinder. Moreover, there was concern that damaging the wristband could ruin the 2MP camera. It took, by the way, just passable photos.

Apple's apparent idea for a smartwatch camera is less about capturing a decent photo and more about ambient awareness. Information that one or more cameras can glean about your environment could inform Apple Intelligence – assuming Apple Intelligence is, by then, what Apple's been promising all along.

Powerful AI works best with data: training data to build the models, and real-time data for analysis by those same models. Our best iPhones and best smartwatches are full of sensors that tell these devices where they are, where they're going, how fast they're moving, and if you've taken a fall or been in a car crash while carrying or wearing them. The watch has no camera, and your phone does not use its camera to build a data picture unless you ask it to.

Currently, you can squeeze your Camera Control button on the iPhone 16 and enable Visual Intelligence. This lets you take a picture and ask ChatGPT or Google Search to analyze it.

An eye on your wrist

A camera on your smartwatch, though, might always be on and trying, even as you pump your arms during a brisk run, to tell you about what's around and in front of you.

It might be looking at the people running toward you, and could possibly identify them on the fly, assuming it can get a clear enough shot. The watch could then connect to your phone or AirPods and tell you who they are: "That's Bob Smith. According to his LinkedIn, he works in real estate." I'm not sure how those other people would feel about that, though.

I get that some of this sounds very cool and futuristic, but are we really meant to know that much about everything around us? Wouldn't it be better to explore what we want to with our eyes and ignore the rest? Exactly how much information can a human take?

It needs this but...

There are no guarantees that this will happen. It's just a rumor from Bloomberg News, but it makes sense.

It's high time for Apple to do the first truly significant Apple Watch redesign in a decade. Apple also needs some exciting new technology to remind people it can still innovate. Plus, more hardware sensors open the door to more powerful Apple Intelligence, and with all the recent missteps in that space, Apple is in dire need of an AI win.

I'm fine with all of that, as long as it does not involve putting cameras on my Apple Watch.

from Latest from TechRadar US in News,opinion https://ift.tt/j514KU0

Monday, March 24, 2025

Best Internet Providers in Staten Island, New York

CNET's connectivity experts have found the best ISPs in Staten Island -- top plans for speed, price and reliable coverage.

from CNET https://ift.tt/fg5vbj6

Latest Tech News


  • Nvidia’s DGX Station is powered by the GB300 Grace Blackwell Ultra
  • OEMs are making their own versions – Dell’s is the Pro Max with GB300
  • HP’s upcoming GB300 workstation will be the ZGX Fury AI Station G1n

Nvidia has unveiled two DGX personal AI supercomputers powered by its Grace Blackwell platform.

The first of these is DGX Spark (previously called Project Digits), a compact AI supercomputer that runs on Nvidia’s GB10 Grace Blackwell Superchip.

The second is DGX Station, a supercomputer-class workstation that resembles a traditional tower and is built with the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip.

Dell and HP reveal their versions

The GB300 features the latest-generation Tensor Cores and FP4 precision, and the DGX Station includes 784GB of coherent memory space for large-scale training and inferencing workloads, connected to a Grace CPU via NVLink-C2C.

The DGX Station also features the ConnectX-8 SuperNIC, designed to supercharge hyperscale AI computing workloads.

Nvidia’s OEM partners - Asus, HP, and Dell - are producing their own versions of the DGX Spark powered by the same GB10 Superchip. HP and Dell are also preparing competitors to the DGX Station using the GB300.

Dell has shared new details about its upcoming AI workstation, the Pro Max with GB300 (its DGX Spark version is called Pro Max with GB10).

The specs for its supercomputer-class workstation include 784GB of unified memory, up to 288GB of HBM3e GPU memory, and 496GB of LPDDR5X memory for the CPU.

The system delivers up to 20,000 TOPS of FP4 compute performance, making it well suited for training and inferencing LLMs with hundreds of billions of parameters.

HP’s version of the DGX Station is called the ZGX Fury AI Station G1n. Z by HP is now one of the company’s product lines, and the “n” at the end of the name signifies that it’s powered by an Nvidia processor - in this case, the GB300.

HP says the ZGX Fury AI Station G1n “provides everything needed for AI teams to build, optimize, and scale models while maintaining security and flexibility,” noting that it will integrate into HP’s broader AI Station ecosystem, alongside the previously announced ZGX Nano AI Station G1n (its DGX Spark alternative).

HP is also expanding its AI software tools and support offerings, providing resources designed to streamline workflow productivity and enhance local model development.

Pricing for the DGX Station and the Dell and HP workstations isn’t known yet, but they obviously aren’t going to be cheap. Pricing for the tiny DGX Spark starts at $3,999, and the larger machines will cost significantly more.

from Latest from TechRadar US in News,opinion https://ift.tt/suEya50

Sunday, March 23, 2025

Today's NYT Connections Hints, Answers and Help for March 24, #652

Hints and answers for Connections for March 24, No. 652.

from CNET https://ift.tt/6zfHPhc

Frankenstein Fraud: How to Protect Yourself Against Synthetic Identity Fraud

Criminals can stitch together pieces of your personal data to create an entirely new identity. Here's how to stop them.

from CNET https://ift.tt/R4Moub1

Heat Domes and Surging Grid Demand Threaten US Power Grids with Blackouts

A new report shows a sharp increase in peak electricity demand, leading to blackout concerns in multiple states. Here's how experts say ...