Wednesday, March 26, 2025

Latest Tech News

Thanks, Sam Altman, for giving us access to ChatGPT's new integrated image-generation skills. They're, as Steve Jobs might've described them, insanely good.

So good, in fact, that I'm worried now about my little corner of the universe where we try to discern the accuracy of renders, models, and pre-production leaks that might tell us the tale of future Apple products, like the rumored iPhone 17 Air.

For those who don't know, the iPhone 17 Air (or Slim) is the oft-talked-about but never-confirmed ultra-slim hybrid of the iPhone 16 Plus, iPhone 16e, and SE, and it could be the most exciting update in the iPhone 17 line Apple is expected to unveil in September.

While it may not be the most powerful iPhone, it should be the biggest and thinnest of the bunch. Even the single rear camera might not be enough to keep potential buyers away.

Imagining what it could look like, well, that's my job. Or it was until I started working with ChatGPT running the recently updated 4o model, which is capable of generating images out of thin air or based on photos and images you upload into it.

It's a slightly methodical model, taking up to 45 seconds to generate an image that flows in slowly, almost one microscopic, horizontal line of pixels at a time. The results are something to behold.

It's not just the quality but how ChatGPT can maintain the thread and cohesion of images from prompt to prompt. Usually, if you start with image generation in something like OpenAI's DALL-E or, say, X's Grok, it'll do a good job with the first image.

However, when you request changes, elements of the original disappear or end up altered. It's even harder to create a series of images that appear to be part of the same story or theme. There are usually too many differences.

ChatGPT 4o image generation appears different and, possibly, more capable.

ChatGPT 4o did a nice anime glow-up with my picture of a hawk (left). (Image credit: ChatGPT-generated images along with source material)

Having already experimented a bit with the model shortly after Altman and other OpenAI engineers announced it, I quickly found that ChatGPT 4o did its best work when you started with a solid source.

I initially had fun turning images of myself and even photos I took this week of a peregrine hawk into anime. However, I was curious about ChatGPT's photo-realism capabilities, especially as they relate to my work.

Apple announced this week that WWDC 2025's keynote would fall on June 9. It's an event where the tech giant outlines platform updates (iOS, iPadOS, macOS, etc.) that inform much of how we think about Apple's upcoming product lineup. With information like this, we can start to map out the future of the anticipated iPhone 17 line. Visualizing what that will look like can be tough, though. So, I decided to let ChatGPT's newest image model show me the way.

A real photo of an iPhone SE on the left and a ChatGPT 4o-generated one on the right. (Image credit: ChatGPT-generated images along with source material)

Since the iPhone 17 Air would conceivably be the newest member of the iPhone family (shoving aside the less-than-exciting iPhone 16e), I decided to focus on that.

Initially, I handed ChatGPT an older iPhone SE review image with this prompt:

"Use this photo to imagine what an Apple iPhone 17 Air might look like. Please make it photo-realistic and a nice, bright color."

ChatGPT did a good job of maintaining the setting from the original photo and most of my hand, though I think I lost a finger. It did well updating the finish and even added a second camera, making it part of a raised camera bump.

I followed with this prompt:

"This is good. Since the iPhone 17 Air is supposed to be super-thin, can you show it from the side?"

ChatGPT lost the background and made the image look like an ad for the iPhone 17 Air. It was a nice touch, but the phone didn't look thin enough. I prompted ChatGPT to make it thinner, which it did.

This was progress, but I quickly realized my error. I hadn't based the prompt on available iPhone 17 Air rumors, and maybe I wasn't being prescriptive enough in my prompts.

(Image credit: ChatGPT-generated images along with source material)

Since the iPhone SE is now a fully retired design, I decided to start over with a review image of the iPhone 16 Pro and initially used the same prompt, which delivered an iPhone 16 Pro in a lovely shade of blue.

This time, when I asked to see the thin side of the phone, I told ChatGPT, "Don't change the background."

I was pleased to see that ChatGPT more or less kept my backyard bushes intact and seamlessly inserted the new phone in something that now sort of looked like a more attractive version of my hand.

My original iPhone 16 Pro review image is on the left. ChatGPT 4o's work is on the right. (Image credit: ChatGPT-generated images along with source material)

Some iPhone 17 Air rumors claim the phone might have just one camera, so I told ChatGPT to remove two cameras and rerender.

In previous prompts, I'd told ChatGPT to "make it thinner," but what if I gave the chatbot an exact measurement?

"Now show me the side of the iPhone 17 Air. It should be 5.4mm thick and the same color."

(Image credit: ChatGPT-generated images along with source material)

This was almost perfect. I did notice, though, that there was no discernible camera bump, which seems unlikely in a 5.4mm-thick iPhone. Even the anticipated ultra-thin Samsung Galaxy S25 Edge features a camera bump. There is no way the iPhone 17 Air will get away without one.

Finally, I asked for a render of the screen:

"Now show me the iPhone 17 Air screen. Make sure it shows the Dynamic Island. The screen should be bright and look like an iPhone home screen with apps and widgets."

Once again, ChatGPT did an excellent job, except for an "iOS IAir" label just above the dock. The rest of the app icon labels are perfect, which is impressive when you consider the difficulty most image-generation models have with text.
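
For anyone who would rather script this kind of iterate-on-a-source-photo workflow than click through the ChatGPT app, here is a minimal sketch using the image-edit endpoint in the OpenAI Python SDK. It assumes the 4o image model eventually becomes reachable through that API (at the time of writing it was only in the ChatGPT app), so the model name, file names, and prompt below are placeholders rather than a documented recipe.

    # Illustrative sketch only: assumes the 4o image model is exposed through the
    # standard Images API. Model name and file names are placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Use this photo to imagine what an Apple iPhone 17 Air might look like. "
        "Keep the background unchanged, make the phone 5.4mm thick, and give it "
        "a single rear camera with a slight bump."
    )

    with open("iphone_16_pro_review.jpg", "rb") as source_photo:  # hypothetical source image
        result = client.images.edit(
            model="gpt-image-1",  # placeholder model name, not confirmed here
            image=source_photo,
            prompt=prompt,
        )

    # The endpoint can return a URL or base64 data depending on the model.
    image = result.data[0]
    if image.b64_json:
        with open("iphone_17_air_concept.png", "wb") as out:
            out.write(base64.b64decode(image.b64_json))
    else:
        print(image.url)

The exact call matters less than the lesson from the experiment above: start from a solid source image and be prescriptive about what must stay the same.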

ChatGPT doesn't produce images with AI watermarks; only the file names tell you these are ChatGPT images. That's concerning, as is the exceptional quality.

I expect the internet will soon be flooded with ChatGPT iPhone and other consumer electronics hardware renders. We won't know what's a leak, what's a hand-made render, or what's direct from the mind of ChatGPT based on prompts from one enterprising tech editor.

from Latest from TechRadar US in News,opinion https://ift.tt/NBDmpoT

Tuesday, March 25, 2025

WWDC 2025: Apple Confirms June 9 Date for Next Major Event

The tech giant is expected to reveal iOS 19 and other major software updates at its annual developer conference.

from CNET https://ift.tt/W2vPYe3

Latest Tech News

Apple might want to put a camera or two on your next Apple Watch, ostensibly to assist its AI in interpreting your environment and, perhaps, acting on your behalf: "There's a hill up ahead! You might want to accelerate your running pace, but watch out for that puddle; it might be a pothole!"

That sounds useful, but do we need a smartwatch to do a job best left to our eyes? You'll see that hill, you'll take note of the puddle, and subconsciously plan a route around it. Why would you need a camera on your wrist?

Forgive me if I am a bit against the whole concept of a wearable camera. I think that unless you're a police officer who has to record all their interactions with the public (see The Rookie for details), a chest-bound camera is a bad idea. I think most Humane AI Pin wearers (and Humane AI) quickly discovered this.

Cameras on glasses aren't as bad, perhaps because they sit so close to your eyes, capturing what you're already looking at and making mental notes about anyway. There are privacy concerns, though, and when I've worn Ray-Ban Meta Smart Glasses, I've had a few people ask if I'm recording them. There's a little light on the frame that tells them as much, but I get the concern. No one wants to be recorded or have their picture taken without their explicit permission.

Never a good idea

We've seen cameras on smartwatches before. Back in 2013, Samsung unveiled the beefy Samsung Galaxy Gear, which I wore and reviewed. Samsung's idea for an on-wrist camera was, shall I say, unusual.

Instead of integrating the camera into the smartwatch's body, Samsung stuffed it into the wristband. This was one bad idea on top of another. Placing the camera on the wristband forced you to position your wrist just right to capture a photo, using the smartwatch display as a viewfinder. There was also the worry that damage to the wristband could ruin the 2MP camera, which, by the way, took just passable photos.

Apple's apparent idea for a smartwatch camera is less about capturing a decent photo and more about ambient awareness. Information that one or more cameras can glean about your environment could inform Apple Intelligence – assuming Apple Intelligence is, by then, what Apple's been promising all along.

Powerful AI works best with data, both training data to build the models and real-time data for those same models to analyze. Our best iPhones and best smartwatches are full of sensors that tell these devices where they are, where they're going, how fast they're moving, and whether you've taken a fall or been in a car crash while carrying or wearing them. The watch has no camera, though, and your phone does not use its camera to build a data picture unless you ask it to.

Currently, you can squeeze your Camera Control button on the iPhone 16 and enable Visual Intelligence. This lets you take a picture and ask ChatGPT or Google Search to analyze it.

An eye on your wrist

A camera on your smartwatch, though, might always be on and trying, even as you pump your arms during a brisk run, to tell you about what's around and in front of you.

It might be looking at the people running toward you and could possibly identify them on the fly, assuming it can get a clear enough shot. The watch could then connect to your phone or AirPods and tell you: "That's Bob Smith. According to his LinkedIn, he works in real estate." I'm not sure how those other people would feel about that, though.

I get that some of this sounds very cool and futuristic, but are we really meant to know that much about everything around us? Wouldn't it be better to explore what we want to with our eyes and ignore the rest? Exactly how much information can a human take?

It needs this but...

There are no guarantees that this will happen. It's just a rumor from Bloomberg News, but it makes sense.

It's high time for Apple to do the first truly significant Apple Watch redesign in a decade. Apple also needs some exciting new technology to remind people it can still innovate. Plus, more hardware sensors open the door to more powerful Apple Intelligence, and with all the recent missteps in that space, Apple is in dire need of an AI win.

I'm fine with all of that, as long as it does not involve putting cameras on my Apple Watch.

from Latest from TechRadar US in News,opinion https://ift.tt/j514KU0

Monday, March 24, 2025

Best Internet Providers in Staten Island, New York

CNET's connectivity experts have found the best ISPs in Staten Island -- top plans for speed, price and reliable coverage.

from CNET https://ift.tt/fg5vbj6

Latest Tech News


  • Nvidia’s DGX Station is powered by the GB300 Grace Blackwell Ultra
  • OEMs are making their own versions – Dell’s is the Pro Max with GB300
  • HP’s upcoming GB300 workstation will be the ZGX Fury AI Station G1n

Nvidia has unveiled two DGX personal AI supercomputers powered by its Grace Blackwell platform.

The first of these is DGX Spark (previously called Project Digits), a compact AI supercomputer that runs on Nvidia’s GB10 Grace Blackwell Superchip.

The second is DGX Station, a supercomputer-class workstation that resembles a traditional tower and is built with the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip.

Dell and HP reveal their versions

The GB300 features the latest-generation Tensor Cores and FP4 precision, and the DGX Station includes 784GB of coherent memory space for large-scale training and inferencing workloads, connected to a Grace CPU via NVLink-C2C.

The DGX Station also features the ConnectX-8 SuperNIC, designed to supercharge hyperscale AI computing workloads.

Nvidia’s OEM partners - Asus, HP, and Dell - are producing DGX Spark rivals powered by the same GB10 Superchip. HP and Dell are also preparing competitors to the DGX Station using the GB300.

Dell has shared new details about its upcoming AI workstation, the Pro Max with GB300 (its DGX Spark version is called Pro Max with GB10).

The specs for its supercomputer-class workstation include 784GB of unified memory, up to 288GB of HBM3e GPU memory, and 496GB of LPDDR5X memory for the CPU.

The system delivers up to 20,000 TOPS of FP4 compute performance, making it well suited for training and inferencing LLMs with hundreds of billions of parameters.

HP’s version of the DGX Station is called the ZGX Fury AI Station G1n. Z by HP is now one of the company’s product lines, and the “n” at the end of the name signifies that it’s powered by an Nvidia processor - in this case, the GB300.

HP says the ZGX Fury AI Station G1n “provides everything needed for AI teams to build, optimize, and scale models while maintaining security and flexibility,” noting that it will integrate into HP’s broader AI Station ecosystem, alongside the previously announced ZGX Nano AI Station G1n (its DGX Spark alternative).

HP is also expanding its AI software tools and support offerings, providing resources designed to streamline workflow productivity and enhance local model development.

Pricing for the DGX Station and the Dell and HP workstations isn’t known yet, but they obviously aren’t going to be cheap. Pricing for the tiny DGX Spark starts at $3,999, and the larger machines will cost significantly more.

from Latest from TechRadar US in News,opinion https://ift.tt/suEya50

Sunday, March 23, 2025

Today's NYT Connections Hints, Answers and Help for March 24, #652

Hints and answers for Connections for March 24, No. 652.

from CNET https://ift.tt/6zfHPhc

Frankenstein Fraud: How to Protect Yourself Against Synthetic Identity Fraud

Criminals can stitch together pieces of your personal data to create an entirely new identity. Here's how to stop them.

from CNET https://ift.tt/R4Moub1

Latest Tech News


  • Asus' new Ascent GX10 brings AI supercomputing power directly to developers
  • Promises 1000 TOPS of AI processing and can handle models up to 200 billion parameters
  • It's cheaper than Nvidia DGX Spark, with less storage but similar performance

AI development is getting ever more demanding, and Asus wants to bring high-performance computing straight to the desks of developers, researchers, and data scientists with the Ascent GX10, a compact AI supercomputer powered by Nvidia’s Grace Blackwell GB10 Superchip.

Asus’s rival to Nvidia’s DGX Spark (previously Project Digits) is designed to handle local AI workloads, making it easier to prototype, fine-tune, and run impressively large models without relying entirely on cloud or data center resources.

The Ascent GX10 comes with 128GB of unified memory, and the Blackwell GPU with fifth-generation Tensor Cores and FP4 precision support means it can deliver up to 1000 TOPS of AI processing power. It also includes a 20-core Grace Arm CPU, which speeds up data processing and orchestration for AI inferencing and model tuning. Asus says it will allow developers to work with AI models of up to 200 billion parameters without running into major bottlenecks.

Powerful yet compact

“AI is transforming every industry, and the Asus Ascent GX10 is designed to bring this transformative power to every developer’s fingertips,” said KuoWei Chao, General Manager of Asus IoT and NUC Business Group.

“By integrating the Nvidia Grace Blackwell Superchip, we are providing a powerful yet compact tool that enables developers, data scientists, and AI researchers to innovate and push the boundaries of AI right from their desks.”

Asus has built the GX10 with NVLink-C2C, which provides more than five times the bandwidth of PCIe 5.0, allowing the CPU and GPU to share memory efficiently, improving performance across AI workloads.

The system also comes with an integrated ConnectX network interface, so two GX10 units can be linked together to handle even larger models, such as Llama 3.1 with 405 billion parameters.
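
For a rough sense of why two linked units matter for a model that size, here is some back-of-envelope arithmetic (my own illustration, not an Asus or Nvidia figure), assuming the weights are held at FP4 precision:

    # Back-of-envelope memory estimate for Llama 3.1 405B at FP4 (illustrative only)
    params = 405e9                # 405 billion parameters
    bytes_per_param = 0.5         # FP4 = 4 bits per weight
    weights_gb = params * bytes_per_param / 1e9   # roughly 203 GB of weights

    one_gx10_gb = 128             # unified memory of a single Ascent GX10
    two_gx10_gb = 2 * one_gx10_gb # two units linked over ConnectX

    print(f"FP4 weights: ~{weights_gb:.0f} GB")
    print(f"One GX10: {one_gx10_gb} GB (not enough on its own)")
    print(f"Two linked GX10s: {two_gx10_gb} GB (fits, with headroom for activations and KV cache)")

Anything held at higher precision, or a long context window, eats into that headroom quickly.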

Asus says the Ascent GX10 will be available for pre-order in Q2 2025. Pricing details have not yet been confirmed by Asus, but Nvidia says it will cost $2,999 and come with 1TB of storage.

In comparison, Nvidia’s own DGX Spark is a thousand dollars more ($3,999) and comes with 4TB of storage.

from Latest from TechRadar US in News,opinion https://ift.tt/qQOodPw

Saturday, March 22, 2025

Best Internet Providers in Pensacola, Florida

If you are looking for fast and reliable internet in Pensacola, consider these options.

from CNET https://ift.tt/lQfhuG6

What's the Future of FAFSA and Financial Aid if the Department of Education Closes?

President Trump wants to shift federal student aid to the Small Business Administration, but experts say it's not that simple.

from CNET https://ift.tt/PiWVt3l

Best Facial Sunscreens of 2025, Tested and Chosen From 50 Top Brands

Your skin is an important organ and you need to keep it protected from the harsh UV rays of the sun if you don't want wrinkles. Here are the best sunscreens, picked by our experts.

from CNET https://ift.tt/BK8ZFNY

Latest Tech News


  • HP ZBook Fury G1i is a powerful 18-inch mobile workstation
  • It's powered by up to an Intel Core Ultra 9 285HX and next-gen Nvidia RTX graphics
  • There's also a 16-inch model available with the same high-end specs and features

It’s a personal preference, but I’ve always liked laptops with bigger screens. That means 16 inches for me, but HP thinks 18-inch laptops are what professionals should be aiming for if they are looking to replace their desktop PCs and get a solid productivity boost.

Billed as the world’s most powerful 18-inch mobile workstation, the HP ZBook Fury G1i 18” still manages to fit into a 17-inch backpack.

That extra two inches gives you roughly 30% more screen area to work with, which can come in handy when handling complex datasets, editing high-resolution media, or working across multiple windows.

Three-fan cooling

HP is pitching the laptop at developers and data scientists who need to train and run LLMs directly on the machine.

The Fury G1i 18” runs on Intel’s latest Core Ultra processors, up to the top-end Core Ultra 9 285HX, with peak speeds of 5.5GHz. These chips also include an NPU with up to 13 TOPS of AI performance. HP says the machine will support next-gen Nvidia RTX GPUs.

There’s support for up to 192GB of DDR5 memory and up to 16TB of PCIe Gen5 NVMe storage. Connectivity includes Thunderbolt 5, HDMI 2.1, USB-A ports, an SD card slot, and Ethernet.

The 18-inch display has a WQXGA (2560x1600) resolution, coupled with a fast 165Hz refresh rate, trading pixel density for smoother motion. Thermal performance is handled by a redesigned three-fan cooling system, along with HP’s Vaporforce tech, allowing up to 200W TDP without throttling under sustained load.

Other features include a spill-resistant RGB-backlit keyboard, four Poly Studio speakers, dual-array microphones, and an optional IR camera for facial login.

The Fury G1i is also available in a 16-inch model for anyone who feels 18 inches is too big to lug around. Pricing and availability details for both models are expected shortly.

from Latest from TechRadar US in News,opinion https://ift.tt/VN4dClF

Friday, March 21, 2025

Best Internet Providers in St. Paul, Minnesota

If you're looking for fiber in St. Paul, you've got it. There are also lots of other great budget and speed options.

from CNET https://ift.tt/B4wpPad

Latest Tech News


  • AMD targets Nvidia’s Blackwell with upcoming Instinct MI355X accelerator
  • Oracle plans massive 30,000-unit MI355X cluster for high-performance AI workloads
  • That’s in addition to Stargate, Oracle’s 64,000-GPU Nvidia GB200 cluster

While AI darling Nvidia continues to dominate the AI accelerator market, with a share of over 90%, its closest rival, AMD, is hoping to challenge the Blackwell lineup with its new Instinct MI355X series of GPUs.

The MI355X, now expected to arrive by mid-2025, is manufactured on TSMC’s 3nm node and built on AMD's new CDNA 4 architecture. It will feature 288GB of HBM3E memory, bandwidth of up to 8TB/sec, and support for FP6 and FP4 low-precision computing, positioning it as a strong rival to Nvidia’s Blackwell B100 and B200.

In 2024, we reported on a number of big wins for AMD, which included shipping thousands of its MI300X AI accelerators to Vultr, a leading privately-held cloud computing platform, and to Oracle. Now, the latter has announced plans to build a cluster of 30,000 MI355X AI accelerators.

Stargate

This latest news was revealed during Oracle’s recent Q2 2025 earnings call, where Larry Ellison, Chairman and Chief Technology Officer, told investors, “In Q3, we signed a multi-billion dollar contract with AMD to build a cluster of 30,000 of their latest MI355X GPUs.”

Although he didn’t go into further detail beyond that, Ellison did talk about Project Stargate, saying, “We are in the process of building a gigantic 64,000 GPU liquid-cooled Nvidia GB200 cluster for AI training.”

He later added, “Stargate looks to be the biggest AI training project out there, and we expect that will allow us to grow our RPO even higher in the coming quarters. And we do expect our first large Stargate contract fairly soon.”

When questioned further about Stargate by a Deutsche Bank analyst, Ellison gave a reply that could just as easily apply to the cluster of MI355X AI accelerators Oracle is planning to build.

"The capability we have is to build these huge AI clusters with technology that actually runs faster and more economically than our competitors. So it really is a technology advantage we have over them. If you run faster and you pay by the hour, you cost less. So that technology advantage translates to an economic advantage which allows us to win a lot of these huge deals,” he said.

Ellison also touched on Oracle’s data center strategy, saying, “So, we can start our data centers smaller than our competitors and then we grow based on demand. Building these data centers is expensive, and they’re really expensive if they’re not full or at least half full. So we tend to start small and then add capacity as demand arises.”

from Latest from TechRadar US in News,opinion https://ift.tt/uAE2KWd

Thursday, March 20, 2025

Watch UEFA Nations League Soccer: Livestream Netherlands vs. Spain From Anywhere

La Roja look to maintain their unbeaten record in the tournament as they travel to Rotterdam for this quarterfinal clash.

from CNET https://ift.tt/q3mRnuT

Latest Tech News

Some TV shows are like comfort food, and for me, there’s no show more comforting than Peep Show. The British sitcom from the early 2000s h...