Sunday, March 30, 2025

Latest Tech News

Look, I love Windows, I do, I really do. It's one of those things that I just can't live without at this point. I've tried macOS, I've tried Linux, I've even dabbled in the world of Android and Chromebooks during my time, and yet, none of it compares to Windows; it just doesn't.

There's a certain familiarity, an indoctrination into the Microsoft cult, that runs deep in me. I grew up using Windows 98 and everything that followed; it was what I gamed on, what I studied on, what I made lifelong friends on—you name it. 98, 2000, XP, Vista, 7, 8, 10, and finally we're here at Windows 11, at least until Microsoft inevitably tells us that its "final" operating system isn't its final operating system.

The thing is though, it really is a bag of spanners at times, and I've kinda developed this habit of going thermonuclear on my own machine at quite regular intervals over my lifetime.

Mostly by flattening and reinstalling Windows onto my PC every other month or so. Why? I'm glad you asked.

The need for an occasional refresh

Well, the thing is, although Windows gives you a lot of freedom and is compatible with more programs than any other operating system out there, that openness does make it somewhat susceptible to bugs. Lots of them.

These can be inflicted by Microsoft directly through Windows Updates, by drivers accidentally corrupting files or programs, or, well, through any number of other avenues.

The worst culprit, of course, is the classic "upgrade from the previous Windows version to this version." Just don't; it's never worth it.

Windows is great, but no operating system is designed to run perfectly forever. (Image credit: Microsoft)

See, registry files corrupt, file directories get mislabelled, and inevitably you'll end up with programs you forget about sitting in the background sucking up critical resources. It's just a bit crap like that, and ironically, although I do have a massive disdain towards macOS, I can't deny its closed-off ecosystem does avoid a lot of these pitfalls.

Whenever anyone asks me about a system bug or help with troubleshooting, my first and often instant reaction is to suggest just flattening the machine entirely and reinstalling a fresh version of Windows on top.

That's why I advocate tying a full-fat Windows license to your Microsoft account so you can easily reinstall and activate Windows 11 on your machine on a dime.

An arduous task

It does take some getting used to, this salting-the-earth kind of strategy, but the benefits are just too great to ignore.

The first thing I recommend is splitting up your storage solution. In just about every build I've ever done, I've recommended a two-drive system. The faster of the two should be your main OS drive, with the second, usually slower, cheaper, and larger, serving as your media/games/backup drive. Any valuable documents, assets, or big downloads live there.

That lets you keep all your games and important files on your D: drive, and then, whenever reinstall time comes a-calling, quickly flatten and reinstall Windows on your C: drive.

If you've got slow internet or just can't be bothered to re-download everything, doing it this way is a huge time-saver. You can get away with partitions instead of separate drives, but that way it's far easier to accidentally wipe the wrong one during your next Windows install.

Laptop, desktop; it doesn't matter, just give your hardware an OS break now and then. (Image credit: Sergey Kisselev / Behance.net / Microsoft)

It also really helps reduce program and document clutter, and it encourages good backup practice. If you know you're going to flatten a machine every two to three months, the likelihood is you'll keep all of your important files and documents safely stored in the cloud or off-site, backed up behind solid authentication procedures as well.

You'll end up with a minimal desktop that's stupidly rapid, clean, up-to-date, and as error-free as Microsoft can muster. If you're building a new PC or transferring an old one to updated hardware, save yourself the hassle and just back up and move your most important files, download a fresh USB Windows Installer, and get cracking. I promise you it's worth it.
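
If you like to script that pre-wipe step, a few lines of Python can mirror the folders you care about over to the data drive before you reach for the installer. This is only a minimal sketch, and the folder paths are illustrative assumptions; point it at whatever actually matters on your machine.

    # backup_before_flatten.py - mirror a few important folders to the data
    # drive before a Windows reinstall. Paths are illustrative; adjust to taste.
    import shutil
    from pathlib import Path

    SOURCES = [
        Path.home() / "Documents",
        Path.home() / "Pictures",
        Path(r"C:\Tools"),  # e.g. portable apps you want to keep (hypothetical path)
    ]
    DEST = Path(r"D:\PreWipeBackup")

    for src in SOURCES:
        if not src.exists():
            print(f"Skipping {src} (not found)")
            continue
        target = DEST / src.name
        # dirs_exist_ok lets you re-run the script without it erroring out (Python 3.8+)
        shutil.copytree(src, target, dirs_exist_ok=True)
        print(f"Copied {src} -> {target}")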

A new lease on (virtual) life

With that, and good internet education and practice, plus a solid VPN, you can then dump aftermarket antivirus as well and rely on good ol' Windows Defender. It's one of the best antivirus programs out there, and lacks the resource vampirism many third-party solutions have.

Worst-case scenario, you get tricked into opening a dodgy email or land on an odd website, and your machine gets whacked with some crypto-scam; just flatten it. Job done. Although again, I'd highly recommend just being a bit more internet savvy first.

The only thing I'd say is that if you do go this route, be careful about the device you do it on and prep accordingly. Some motherboards won't support Ethernet or wireless connectivity out of the box without their drivers installed.

Grab your USB stick, get the Windows installer set up on it, and then stick a folder on it called DRIVERS. Head to your motherboard's product page and grab the relevant drivers; then, once you're finally on the desktop, you should be able to install all your chipset and other drivers and get that internet connectivity back, no sweat.

If you do get stuck on the "need to connect to the internet" Windows 11 install page, hit Shift + F10, click into the Command Prompt window that appears, type OOBE\BYPASSNRO, and hit Enter. The installer will reboot, and you'll then have the option to tell Microsoft you "don't have the internet" and continue with the installation regardless.

So yeah, PSA complete. I got 99 problems, and most of them are Microsoft-related. At least for about 20 minutes anyway.



from Latest from TechRadar US in News,opinion https://ift.tt/Bd8OuqN

Saturday, March 29, 2025

The Final Season of 'The Righteous Gemstones' Is Here: When to Watch Episode 4 on Max

Hallelujah, the first three episodes of season 4 are now streaming.

from CNET https://ift.tt/PSiAKvL

Protein Intake Simplified: Learn Your Daily Needs With This Visual Guide

Getting your daily protein needs doesn't have to be difficult. This visual protein guide eliminates the guesswork.

from CNET https://ift.tt/H4yzNXM

I Get to Watch Disney Fireworks Every Day Thanks to My 3D Printer and Some Elbow Grease

With a 3D printer, a projector and a lot of pixie dust (spray paint), you too can bring Disneyland into your home.

from CNET https://ift.tt/AtcOfnM

Latest Tech News


  • Rubin Ultra GPUs previewed at Nvidia GTC 2025 with Kyber rack mockups
  • Each NVL576 rack may include 576 GPUs across four internal pods
  • Projected power draw reaches 600kW with performance targets of 15 EFLOPS

At Nvidia GTC 2025, the company gave a preview of what its future data center hardware could look like, showcasing mockups of its Rubin Ultra GPUs housed in the Kyber-based NVL576 racks.

These systems are expected to launch in the second half of 2027, and while that’s still some way off, Nvidia is already laying the groundwork for what it describes as the next phase of AI infrastructure.

A single NVL576 rack, according to Jensen Huang, co-founder, president, and CEO of Nvidia, could draw up to 600kW. That's five times the 120kW used by current Blackwell B200 racks, suggesting a steep rise in power per rack going forward.

Powering the future

Tom’s Hardware reports, "Each Rubin Ultra rack will consist of four 'pods,' each of which will deliver more computational power than an entire Rubin NVL144 rack. Each pod will house 18 blades, and each blade will support up to eight Rubin Ultra GPUs - along with two Vera CPUs, presumably, though that wasn't explicitly stated. That's 144 GPUs per pod, and 576 per rack."
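
The arithmetic in that breakdown is worth spelling out, since the headline numbers all hang together. A trivial sketch, using only the figures reported above:

    # Sanity-check the Rubin Ultra NVL576 figures quoted above
    pods_per_rack = 4
    blades_per_pod = 18
    gpus_per_blade = 8

    gpus_per_pod = blades_per_pod * gpus_per_blade   # 144 GPUs per pod
    gpus_per_rack = pods_per_rack * gpus_per_pod     # 576 GPUs per rack

    nvl576_power_kw = 600   # projected draw per NVL576 rack
    b200_power_kw = 120     # current Blackwell B200 rack draw
    print(gpus_per_pod, gpus_per_rack, nvl576_power_kw / b200_power_kw)
    # -> 144 576 5.0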

The Kyber rack infrastructure will support these systems, along with upgraded NVLink modules which will have three next-generation NVLink connections each, compared to just two found in existing 1U rack-mount units.

The first Rubin NVL144 systems, launching in 2026, will rely on existing Grace Blackwell infrastructure. Rubin Ultra arrives in 2027 with far more density.

Tom’s Hardware says that the NVL576 racks are planned to deliver “up to 15 EFLOPS of FP4” in 2027, compared to 3.6 EFLOPS from next year's NVL144 racks.

During the GTC 2025 keynote, Jensen Huang said future racks could eventually require full megawatts of power, meaning 600kW may only be a stepping stone.

As power climbs toward the megawatt range, questions are inevitably growing about how future data centers will be powered.

Nuclear energy is one obvious answer - the likes of Amazon, Meta, and Google are part of a consortium that has pledged to triple nuclear output by 2050 (Microsoft and Oracle are notably missing for the moment), and mobile micro nuclear plants are expected to arrive in the 2030s.




from Latest from TechRadar US in News,opinion https://ift.tt/OCA8ben

Friday, March 28, 2025

April Fool's Day 2025 Pranks: Wearable Mattress, Cat Poo Scented Candle, Sports-Drink Shampoo

If you see a weird product this week and next, don't be so sure it's real.

from CNET https://ift.tt/wZ3oeIW

Best TV on a Budget for 2025

You don't have to spend a lot of cash to get a good TV. Here are our top picks for the best budget televisions from Samsung, Roku and more.

from CNET https://ift.tt/poXYGdU

Latest Tech News


  • Microsoft pulled out of a $12bn deal with CoreWeave, citing delays
  • OpenAI took over the contract, backed by Microsoft’s own investment funds
  • AI sector remains a closed loop driven by a few dominant players

CoreWeave is eyeing a huge (potentially $2.5 billion) IPO in the coming weeks, but it has also had a few unflattering news stories to contend with recently.

Jeffrey Emanuel, whose viral essay argued Nvidia was overpriced and helped trigger a roughly $600 billion single-day drop in its market value, has described CoreWeave as a turkey and called it the “WeWork of AI”.

More recently, Microsoft chose to walk away from a nearly $12 billion option to buy more data-center capacity from the AI hyperscaler.

OpenAI to the rescue

The Financial Times (FT) cited sources familiar with the matter as saying Microsoft had withdrawn from some of its agreements “over delivery issues and missed deadlines”, which shook the tech giant’s confidence in CoreWeave.

The FT added that despite this, Microsoft still had "a number of ongoing contracts with CoreWeave and it remained an important partner.”

Microsoft is CoreWeave’s biggest customer, and the AI hyperscaler disputed the FT's story, saying “All of our contractual relationships continue as planned – nothing has been cancelled, and no one has walked away from their commitments.”

Shortly after that news broke, it was reported that OpenAI would be taking up Microsoft's nearly $12 billion option instead, helping CoreWeave avoid a potentially embarrassing setback so near to its closely watched IPO.

Rohan Goswami at Semafor made a couple of interesting observations on the news, noting, “This isn’t a sign that Microsoft is pulling back on AI - ‘We’re good for our $80 billion,’ Satya Nadella said on CNBC - but an indication that the company is being more tactical about exactly when and where it spends. At the same time, OpenAI’s biggest backer is Microsoft, meaning that OpenAI is paying CoreWeave with money that is largely Microsoft’s to begin with.”

He described this as the rub, saying, “The AI economy is currently a closed loop and will stay that way until a broader swath of economic actors like big and medium-sized companies start spending real dollars on AI software and services. Until then, nearly all the money is coming from a few companies - chiefly Nvidia and Microsoft - which themselves depend on the goodwill of their public shareholders to keep underwriting it all.”




from Latest from TechRadar US in News,opinion https://ift.tt/QLSmk4P

Thursday, March 27, 2025

Nintendo's Allowing Digital Game Sharing: Here's What That Means and How It Works

Virtual Game Cards are coming, and they'll work across systems and family accounts, and on both the Switch and Switch 2. Here's what we know so far.

from CNET https://ift.tt/txGp5Mg

If You Need Multiple Apple AirTags, This 4-Pack Is $30 Off for Amazon's Big Spring Sale

I use Apple AirTags to track pretty much everything I own. Right now, you can get a four-pack for nearly 30% off.

from CNET https://ift.tt/ZQGclWP

Latest Tech News


  • Ascent GX10 is Asus's take on Nvidia's DGX Spark AI supercomputer
  • ServeTheHome spotted the product at GTC 2025 and went hands on
  • The site took photos and noted the AI computer is lighter and cheaper

Nvidia has recently been showing off DGX Spark, its Mac Mini-sized AI supercomputer built around the GB10 Grace Blackwell Superchip.

Originally called Project Digits, the device has been created to bring advanced model development and inferencing directly to desktops. Although it looks like a mini PC, it’s incredibly powerful and designed to handle demanding AI workflows such as fine-tuning, inference, and prototyping without relying entirely on external infrastructure.

Aimed at developers, researchers, data scientists, and students working with increasingly complex AI models locally, it comes with 128GB of LPDDR5x unified memory and up to 4TB of NVMe SSD storage. The DGX Spark isn’t cheap at $3999, but if you’re looking to save some money without cutting corners, there are some alternatives.

The lighter choice

Dell’s Pro Max with GB10 and HP’s ZGX Nano AI Station are DGX Spark clones, built around the GB10 Grace Blackwell Superchip. Asus also has its own GB10 AI supercomputer clone, the Ascent GX10, which is priced at $2999, significantly less than Nvidia’s offering.

Shown off at Nvidia GTC 2025 (as, naturally, was Nvidia’s own DGX Spark), the Ascent GX10 comes with 128GB of unified memory and a Blackwell GPU with fifth-generation Tensor Cores and FP4 precision support. While DGX Spark has 4TB of storage, Asus’s version only has 1TB.

ServeTheHome was at the conference and spotted the Ascent GX10 on Asus’s stand where it snapped a few photos of the product.

The site also noted, “The front of the system has the ASUS logo and a power button. This may sound strange, but ASUS using plastic on the outside of the chassis in parts versus Nvidia using more metal is an interesting trade-off. Nvidia DGX Spark feels in hand much more like the Apple Mac Studio from a density perspective while the Asus felt lighter. If you truly want this to be a portable AI box, then ASUS may have a leg up, especially if you want to cluster it.”

On the rear of the system, STH says there’s an HDMI port, four high-speed USB4 40Gbps ports, a 10GbE NIC for base networking, and a dual-port Nvidia ConnectX-7, which Nvidia described as an Ethernet version of the CX7 designed for RDMA clustering.

STH’s Patrick Kennedy noted, “For some context here, a Nvidia ConnectX-7 NIC these days often sells for $1500–2200 in single unit quantities, depending on the features and supply of the parts. At $2999 for a system with this built-in that is awesome. Our sense is that folks are going to quickly figure out how to cluster these beyond the 2-unit cluster that Nvidia is going to support at first.”

Nvidia GB10 motherboard (Image credit: ServeTheHome)




from Latest from TechRadar US in News,opinion https://ift.tt/yO7rhFd

Latest Tech News


  • There are reportedly no current plans for another iPhone mini
  • The last 'mini' model launched in September 2021
  • 1 in 5 TechRadar readers say sub-6 inches is the best phone size

The last 'mini' phone we saw from Apple was the 5.4-inch iPhone 13 mini, which launched in September 2021, and was replaced by the 6.7-inch iPhone 14 Plus – and it seems unlikely that Apple is going to bring back a smaller iPhone model any time soon.

Bloomberg reporter Mark Gurman, who is usually a reliable source when it comes to Apple, said in a live Q & A (via MacRumors) that Apple has "really shifted away" from smaller form factors and that its engineers "are not working on a smaller iPhone right now".

The current iPhone line-up comprises the 6.1-inch iPhone 16, the 6.7-inch iPhone 16 Plus, the 6.3-inch iPhone 16 Pro, and the 6.9-inch iPhone 16 Pro Max – so if you want anything smaller than 6.1 inches in terms of display size, you're out of luck.

Gurman did say Apple might one day reconsider its position if market pressures change, but don't expect anything for the foreseeable future. This year, the iPhone 16 Plus is predicted to be replaced by the iPhone 17 Air, possibly with the same screen size.

Should Apple reconsider?

The Galaxy Z Flip 6 at least folds down to a small size (Image credit: Samsung)

Apple never said anything officially, but market reports suggested the iPhone 13 mini wasn't a great seller – which most likely sealed its fate. But according to many TechRadar readers, the iPhone 13 mini was the perfect size for a smartphone.

We ran a poll on the TechRadar WhatsApp channel asking you what your favorite phone screen size was. Top of the pile, with 241 votes out of 799 (30%), was the largest size besides foldables: phones 6.9 inches or bigger, such as the iPhone 16 Pro Max.

In second place, however, were phones with screens under 6 inches in size – like, for example, the iPhone 13 mini. This size got 171 votes (21%), but unfortunately for small phone fans, it's getting harder and harder to find more compact handsets.

Clearly, not enough of the people who like smaller-sized handsets went out and bought an iPhone 13 mini, and Apple has taken note. If you fall into that category, you could try a flip foldable like the Samsung Galaxy Z Flip 6 instead.




from Latest from TechRadar US in News,opinion https://ift.tt/OSasF0l

Wednesday, March 26, 2025

23andMe Files for Bankruptcy Protection: What Will Happen to Your Data?

What's scary is how much we don't know. If you're worried about data privacy, think about deleting your data now.

from CNET https://ift.tt/2SJHTMP

Latest Tech News

Thanks, Sam Altman, for giving us access to ChatGPT's new integrated image-generation skills. They're, as Steve Jobs might've described them, insanely good.

So good, in fact, that I'm worried now about my little corner of the universe where we try to discern the accuracy of renders, models, and pre-production leaks that might tell us the tale of future Apple products, like the rumored iPhone 17 Air.

For those who don't know, the iPhone 17 Air (or Slim) is the oft-talked-about but never-confirmed ultra-slim iPhone 16 Plus/iPhone 16e/SE hybrid that could be the most exciting iPhone update when Apple likely unveils a whole new iPhone 17 line in September.

While it may not be the most powerful iPhone, it should be the biggest and thinnest of the bunch. Even the single rear camera might not be enough to keep potential buyers away.

Imagining what it could look like, well, that's my job. Or it was until I started working with ChatGPT running the recently updated 4o model, which is capable of generating images out of thin air or based on photos and images you upload into it.

It's a slightly methodical model, taking up to 45 seconds to generate an image that flows in slowly, almost one microscopic, horizontal line of pixels at a time. The results are something to behold.

It's not just the quality but how ChatGPT can maintain the thread and cohesion of images from prompt to prompt. Usually, if you start with image generation in something like OpenAI's Dall-E or, say, X's Grok, it'll do a good job with the first image.

However, when you request changes, elements of the original disappear or end up altered. It's even harder to create a series of images that appear to be part of the same story or theme. There are usually too many differences.

ChatGPT 4o image generation appears different and, possibly, more capable.
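
As an aside, if you would rather script this kind of prompt-to-image experiment than click through the ChatGPT interface, OpenAI's Python SDK exposes an images endpoint. The snippet below is only a rough sketch of that route, not the in-app 4o tool the rest of this piece is about, and the model name, prompt, and output handling are all illustrative assumptions.

    # Rough sketch: one-shot image generation via the OpenAI Python SDK
    # (pip install openai; OPENAI_API_KEY must be set in the environment).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY automatically

    result = client.images.generate(
        model="dall-e-3",  # illustrative; the in-app 4o image tool is not this endpoint
        prompt=(
            "A photorealistic concept of an ultra-thin smartphone in a "
            "bright color, held in a hand outdoors"
        ),
        size="1024x1024",
        n=1,
    )

    print(result.data[0].url)  # temporary URL to the generated image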

ChatGPT 4o did a nice anime glow up with my picture of a hawk (left). (Image credit: Chat GPT-generated images along with source material)

Having already experimented a bit with the model shortly after Altman and other OpenAI engineers announced it, I quickly found that ChatGPT 4o did its best work when you started with a solid source.

I initially had fun turning images of myself and even photos I took this week of a peregrine hawk into anime. However, I was curious about ChatGPT's photo-realism capabilities, especially as they relate to my work.

Apple announced this week that WWDC 2025's keynote would fall on June 9. It's an event where the tech giant outlines platform updates (iOS, iPadOS, macOS, etc.) that inform much of how we think about Apple's upcoming product lineup. With information like this, we can start to map out the future of the anticipated iPhone 17 line. Visualizing what that will look like can be tough, though. So, I decided to let ChatGPT's newest image model show me the way.

A real photo of an iPhone SE on the left and a ChatGPT 4o-generated one on the right. (Image credit: Chat GPT-generated images along with source material)

Since the iPhone 17 Air would conceivably be the newest member of the iPhone family (shoving aside the less-than-exciting iPhone 16e), I decided to focus on that.

Initially, I handed ChatGPT an older iPhone SE review image with this prompt:

"Use this photo to imagine what an Apple iPhone 17 Air might look like. Please make it photo-realistic and a nice, bright color."

ChatGPT did a good job of maintaining the settings from the original photo and most of my hand, though I think I lost a finger. It did well updating the finish and even added a second camera, making it part of a raised camera bump.

I followed with this prompt:

"This is good. Since the iPhone 17 Air is supposed to be super-thin, can you show it from the side?"

ChatGPT lost the background and made the image look like an ad for the iPhone 17 Air. It was a nice touch, but the phone didn't look thin enough. I prompted ChatGPT to make it thinner, which it did.

This was progress, but I quickly realized my error. I hadn't based the prompt on available iPhone 17 Air rumors, and maybe I wasn't being prescriptive enough in my prompts.

(Image credit: Chat GPT-generated images along with source material)

Since the iPhone SE is now a fully retired design, I decided to start over with a review image of the iPhone 16 Pro and initially used the same prompt, which delivered an iPhone 16 Pro in a lovely shade of blue.

This time, when I asked to see the thin side of the phone, I told ChatGPT, "Don't change the background."

I was pleased to see that ChatGPT more or less kept my backyard bushes intact and seamlessly inserted the new phone in something that now sort of looked like a more attractive version of my hand.

My original iPhone 16 Pro review image is on the left. ChatGPT 4o's work is on the right. (Image credit: Chat GPT-generated images along with source material)

Some iPhone 17 Air rumors claim the phone might have just one camera, so I told ChatGPT to remove two of the cameras and re-render.

In previous prompts, I'd told ChatGPT to "make it thinner," but what if I gave the chatbot an exact measurement?

"Now show me the side of the iPhone 17 Air. It should be 5.4mm thick and the same color."

(Image credit: Chat GPT-generated images along with source material)

This was almost perfect. I did notice, though, that there was no discernible camera bump, which seems unlikely in a 5.4mm-thick iPhone. Even the anticipated ultra-thin Samsung Galaxy S25 Edge features a camera bump. There is no way the iPhone 17 Air will get away without one.

Finally, I asked for a render of the screen:

"Now show me the iPhone 17 Air screen. Make sure it shows the Dynamic Island. The screen should be bright and look like an iPhone home screen with apps and widgets."

Once again, ChatGPT did an excellent job, except for an "iOS IAir" label just above the dock. The rest of the App Icon labels are perfect, which is impressive when you consider the difficulty most image generation models have with text.

ChatGPT doesn't produce images with AI watermarks; only the file names tell you these are ChatGPT images. That's concerning, as is the exceptional quality.

I expect the internet will soon be flooded with ChatGPT iPhone and other consumer electronics hardware renders. We won't know what's a leak, what's a hand-made render, or what's direct from the mind of ChatGPT based on prompts from one enterprising tech editor.




from Latest from TechRadar US in News,opinion https://ift.tt/NBDmpoT

Tuesday, March 25, 2025

WWDC 2025: Apple Confirms June 9 Date for Next Major Event

The tech giant is expected to reveal iOS 19 and other major software updates at its annual developer conference.

from CNET https://ift.tt/W2vPYe3

Heat Domes and Surging Grid Demand Threaten US Power Grids with Blackouts

A new report shows a sharp increase in peak electricity demand, leading to blackout concerns in multiple states. Here's how experts say ...