It is not uncommon for cars to feature in video games. After all, who doesn't remember manically piloting a Ferrari Testarossa Spider in Sega's iconic OutRun?
But BMW has leveraged the power of online gaming's massive audience to introduce the uninitiated to its upcoming all-electric iX2 model. The German brand announced that it is the first to introduce a Car Creator in Fortnite.
To do so, BMW created a virtual city of the future, the island of "Hypnopolis", which is fully explorable by Fortnite players, along with a bespoke storyline that (you guessed it) heavily revolves around the BMW iX2.
Cleverly, BMW has also digitally replicated and incorporated many of its architectural landmarks, such as its headquarters tower, recognizable by its unique Four Cylinder design, and the neighboring BMW Welt.
There's a lot of talk of the "digital brand experience" and opening up "new dialogue opportunities with Next Gen target groups" from Stefan Ponikva, Vice President of BMW Brand Communication and Brand Experience, who sees the move as bringing "the brand to life in the hands of the players".
To take this notion to the next level, BMW has also introduced a neat Car Creator within its virtual world, which is located below the double cone in BMW Welt and is unlocked by completing various challenges. Here, players can design their own BMW iX2.
At first, the iX2 appears in "prototype disguise", with players initially restricted to a limited palette of gamified paint schemes, rims and trunk contents. But BMW plans to launch the vehicle IRL next week, at which point the official paint finishes, rims and interior options for the new BMW iX2 will also become available within the Car Creator.
A taste of things to come?
We can expect BMW to experiment with more virtual experiences in the near future (Image credit: BMW/Epic Games)
Although leveraging Fortnite looks like a canny way to connect with new customers (particularly a younger audience), BMW is investing a lot of time, money and effort to increase its virtual experiences offering in general.
Recently, it hosted its BMW Group Supplierthon, which invited hundreds of companies working in the virtual space to effectively pitch their best ideas. Some of the winners will have their concepts implemented into products over the coming years.
One of the chosen innovations transforms an efficient driving style into an entertaining race against other BMW drivers, while Chinese start-up DeepMirror Inc's virtual spaces experience and Web3 Studio's expertise in blockchain development and design also caught the Bavarian brand's eye.
On top of this, the German brand partnered with Nvidia earlier this year to build the perfect digital twin of its future 400-hectare plant in Debrecen, allowing designers and engineers to make tweaks in the virtual world.
Adobe MAX 2023 is less than a week away, and to promote the event, the company recently published a video teasing its new “object-aware editing engine” called Project Stardust.
According to the trailer, the feature can identify individual objects in a photograph and instantly separate them into their own layers. Those same objects can then be moved around on-screen or deleted. Selection can be done either manually or automatically via the Remove Distractions tool. The software appears to understand the difference between the main subjects of an image and the background passersby you want to get rid of.
What's interesting is that moving or deleting something doesn't leave behind a hole. The empty space is filled in, most likely by a generative AI model. Plus, you can clean up any left-behind evidence of a deleted item. In its sample image, Adobe erases a suitcase held by a female model and then proceeds to edit her hand so that she's holding a bouquet of flowers instead.
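To picture what's happening, here is a toy NumPy sketch of the delete-and-fill idea – emphatically not Adobe's actual method: a boolean mask marks the deleted object's pixels, and the hole is filled with a stand-in value (the image's median color) where a real tool would use a generative inpainting model. The "suitcase" region below is hypothetical.

```python
import numpy as np

# Toy delete-and-fill: mask the object's pixels, then fill the hole.
# A real editor would inpaint with a generative model; the median
# color here is purely for illustration.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)

mask = np.zeros((480, 640), dtype=bool)
mask[200:300, 250:400] = True            # hypothetical "suitcase" region

fill = np.median(image[~mask], axis=0).astype(np.uint8)
result = image.copy()
result[mask] = fill                      # no hole left behind
```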
(Image credit: Adobe)
The same tech can also be used to change articles of clothing in pictures. A yellow down jacket can be turned into a black leather jacket or a pair of khakis into black jeans. To do this, users will have to highlight the piece of clothing and then enter what they want to see into a text prompt.
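Adobe hasn't said what model powers this prompt-an-edit workflow, but it resembles open-source inpainting pipelines. As a rough point of reference only, here is how the same highlight-then-prompt flow looks with Hugging Face's diffusers library – not Adobe's tooling – assuming local placeholder files model.jpg and jacket_mask.png:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Open-source analogue of the workflow: a mask marks the clothing to
# replace, and a text prompt describes what should appear instead.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("model.jpg").convert("RGB").resize((512, 512))
mask = Image.open("jacket_mask.png").convert("RGB").resize((512, 512))

result = pipe(prompt="a black leather jacket",
              image=image, mask_image=mask).images[0]
result.save("edited.jpg")
```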
(Image credit: Adobe)
AI editor
Functionally, Project Stardust operates similarly to Google's Magic Editor, a generative AI tool present on the Pixel 8 series. That tool lets users highlight objects in a photograph and reposition them however they please. It, too, can fill gaps in images by creating new pixels. However, Stardust feels much more capable. The Pixel 8 Pro's Magic Eraser can fill in gaps, but neither it nor Magic Editor can conjure up entirely new objects the way Stardust's text prompts can. Additionally, Google's version requires manual input, whereas Adobe's software doesn't need it.
Seeing these two side by side, we can't help but wonder if Stardust is actually powered by Google's AI tech. Very recently, the two companies announced a partnership that includes a free three-month trial of Photoshop on the web for people who buy a Chromebook Plus device. Perhaps this partnership runs a lot deeper than free Photoshop, considering how similar Stardust is to Magic Editor.
Impending reveal
We should mention that Stardust isn't perfect. If you look at the trailer, you'll notice some errors, like random holes in the leather jacket and strange warping around the flower model's hands. But this may simply be Stardust at an early stage of development.
There is still a lot we don't know, like whether Stardust will be a standalone app or housed in, say, Photoshop, and whether it will launch in beta first or arrive as a final release. All will presumably be answered on October 10 when Adobe MAX 2023 kicks off. What's more, the company will be showing off other "AI features" coming to "Firefly, Creative Cloud, Express, and more."
Be sure to check out TechRadar’s list of the best Photoshop courses online for 2023 if you’re thinking of learning the software, but don’t know where to start.
The Google Pixel 8 event didn't deliver any massive surprises, thanks to the huge number of leaks we've seen recently. But as is tradition, Google did use its big annual phone launch to reveal an array of new camera tricks that are equal parts impressive, useful, and downright creepy.
The actual camera hardware of the Google Pixel 8 and Pixel 8 Pro isn't anything particularly earth-shattering. The Pixel 8's rear cameras are largely unchanged from the Pixel 7, while the Pro version does get bigger upgrades with a new main camera, an improved ultra-wide, and a 48MP telephoto lens.
But it's been a while since sensor size and lens apertures were the biggest drivers of smartphone camera performance. These days, it's all about computational photography (and video) tricks, an art form that Google has pioneered. So what new modes did we get this year?
Quite a few actually, with Google's focus very much on video, in the form of Audio Magic Eraser and (for the Pixel 8 Pro, at least) new Video Boost and Night Sight Video features. But thanks to the combined powers of Google's Tensor G3 chip and some Google Photos algorithms, we also saw some powerful (and potentially controversial) new photography tricks in the form of Best Take and Zoom Enhance.
Here's a full breakdown of all of those new Pixel 8 and Pixel 8 Pro camera tricks, starting with the one we're feeling most conflicted about.
1. Best Take
Pixel 8 and Pixel 8 Pro
(Image credit: Google)
Let's start with what is arguably the most controversial new Pixel camera feature because it effectively lets you change the facial expressions of people in your group shots. Are you saying we aren't photogenic, Google?
The key thing is that Best Take isn't using generative AI to change a frown into a smile – instead, it takes a series of photos and then lets you pick the best facial expressions for your final shot.
That makes it far more palatable to those who think AI is ruining photography, as it's effectively just doing an automated Photoshop-style blend on a burst of shots. And in our early demos, it's surprisingly effective, with little sign of the uncanny valley giveaways we expected. But this is one we'll want to test to destruction before risking it on our wedding snaps.
2. Video Boost
Pixel 8 Pro only
(Image credit: Google)
Google went particularly hard on new video features at its Made by Google event – and the biggest one was arguably Video Boost, which is coming to the Pixel 8 Pro in December.
In theory, this is computational video done properly – rather than messing about trying to introduce fake bokeh like the iPhone's Cinematic Mode (which Apple has gone very quiet on), the Pixel 8 Pro's new mode instead processes every video frame using Google's cloud-based HDR Plus pipeline.
This is a huge technological feat and one that will involve a little wait for your boosted video. But the results could also be polarizing. Google was keen to show side-by-sides of Video Boost with the iPhone 15 Pro Max's video, pointing to its improved dynamic range and vivid color.
This saturated HDR look isn't necessarily to everyone's taste, though, so it could be one to reserve for particular situations (like high-contrast scenes).
3. Night Sight Video
Pixel 8 Pro only
(Image credit: Google)
Google's Night Sight has been a hugely influential computational photography trick, and now it's coming properly to videos on the Pixel 8 Pro, from December.
Night Sight Video is effectively a low-light version of Video Boost, using multi-frame processing to enhance detail and exposure in dark scenes. Google claims the mode is the "best low-light video on any smartphone", which it says is based on third-party evaluation comparing major US smartphone brands.
Google has announced a version of Night Sight for videos before, at Google IO 2021, but that effectively just stitched photos together to make an animation. We don't yet know what resolutions and frame rates Night Sight Video will support, but we're looking forward to taking it for a spin around a wintery London.
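The core idea behind multi-frame low-light processing is easy to illustrate. A minimal NumPy sketch (toy data; a real pipeline also aligns frames and tone-maps, none of which is shown) demonstrates why merging several dark, noisy frames improves the result: averaging N frames of a static scene cuts random sensor noise by roughly the square root of N.

```python
import numpy as np

# Toy multi-frame merge: average 8 noisy exposures of a static scene.
# Random noise falls by ~sqrt(8), so the merged frame is much cleaner.
rng = np.random.default_rng(0)
scene = np.full((480, 640), 20.0)                  # dim, static scene
frames = [scene + rng.normal(0, 8, scene.shape) for _ in range(8)]
merged = np.mean(frames, axis=0)

print(np.std(frames[0] - scene))   # per-frame noise, ~8
print(np.std(merged - scene))      # merged noise, ~8 / sqrt(8) = ~2.8
```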
4. Audio Magic Eraser
Pixel 8 and Pixel 8 Pro
(Image credit: Google)
Audio quality has long been an afterthought in smartphone video, but Google's aiming to change that on the Pixel 8 and Pixel 8 Pro with a new 'computational audio' trick called Audio Magic Eraser.
This uses machine learning to recognize and divide the audio in your video into separate channels – for example, speech, crowd, wind, noise, and music. You can then turn off any unwanted ones.
Google's demo of a baby talking with a dog's loud background barking removed was impressive, but we'll be keen to test this in the field to see how much it impacts the quality of those individual sound layers.
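To make the separate-then-mute concept concrete, here is a minimal, runnable sketch. The crude frequency split below is only a stand-in for the machine-learning separation model (Google hasn't detailed how its on-device model works), so the stem names are purely illustrative.

```python
import numpy as np

# Sketch of "computational audio": split a mix into named stems, mute
# the unwanted ones, and remix. A learned model does the real split;
# this naive low/high frequency cut is just for illustration.
def separate_stems(mix, sample_rate):
    spectrum = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(len(mix), d=1 / sample_rate)
    low, high = spectrum.copy(), spectrum.copy()
    low[freqs >= 300] = 0                 # low band: wind/rumble-like
    high[freqs < 300] = 0                 # high band: speech-like
    return {"wind": np.fft.irfft(low, n=len(mix)),
            "speech": np.fft.irfft(high, n=len(mix))}

def remix(stems, muted):
    return sum(audio for name, audio in stems.items() if name not in muted)

sample_rate = 16_000
t = np.linspace(0, 1, sample_rate, endpoint=False)
mix = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1_000 * t)
cleaned = remix(separate_stems(mix, sample_rate), muted={"wind"})
```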
5. Magic Editor
Pixel 8 and Pixel 8 Pro
(Image credit: Google)
Back in May, Google announced that Magic Editor (a new generative AI trick that aims to make bad photos obsolete) was en route to select Pixel phones later this year. Well, now we know that those phones are the Pixel 8 and Pixel 8 Pro.
Magic Editor is effectively Magic Eraser on steroids, letting you pick and move objects in your photos, reposition them, and effectively become a Photoshop whizz without having to go anywhere near masking tools and adjustment layers.
It even gives you contextual suggestions on things to change, like swapping out your grey sky for a golden-hour sunset. Some will call it the death of photography. Others will see it as a massive time-saving crutch. Either way, the "experimental" feature will now be available in Google Photos on its latest Pixel phones.
6. Zoom Enhance
Pixel 8 Pro only
(Image credit: Google)
Google seems to take great delight in making sci-fi concepts an unsettling reality – see its Call Screen feature, which has an AI robot interview the person calling you to see if they're worthy of being put through to the real you. Another slightly less chilling, but equally impressive, feature is Zoom Enhance.
Yes, the CSI: Miami 'Enhance' meme is going to get a few more hits today, as the Google Photos feature is the closest we've seen to a real-world equivalent – kind of. Because it's powered by generative AI, Zoom Enhance will very much invent some extra detail when you pinch to zoom into a photo.
That isn't a million miles from how interpolation works, though we doubt it'll stand up in court. Still, it does look like another impressive photographic trick to add to the Pixel 8 Pro's armory – and it even gives you an 'Enhance' button so you can pretend you're in a detective drama.
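For contrast, classical interpolation is what a pinch-to-zoom crop normally gets. A minimal Pillow sketch (assuming a local photo.jpg; the crop coordinates are arbitrary) shows the difference in approach: resizing can only spread existing pixels around, which is exactly why generative upscaling has to invent detail instead.

```python
from PIL import Image

# Classical upscaling: bicubic interpolation smooths between existing
# pixels but cannot recover detail that was never captured.
img = Image.open("photo.jpg")
crop = img.crop((100, 100, 200, 200))            # the pinch-zoom region
upscaled = crop.resize((400, 400), Image.BICUBIC)
upscaled.save("zoomed.jpg")
```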
7. Pro Controls
Pixel 8 Pro only
(Image credit: Google)
Apple's reluctance to build pro-friendly camera controls into its Pro phones has always been a bit baffling, but Google has now done exactly that on the Pixel 8 Pro. Its new Pro Controls let you tweak settings like ISO sensitivity, shutter speed, and focus.
That promises to be a particularly big bonus for video shooters, though Google doesn't go anywhere near as far as phones like the Sony Xperia 1 V – which has three separate camera apps (Photo Pro, Video Pro, and Cinema Pro). Still, more control is always good if you don't want to rely on one of the best camera apps.
Google Pixel 8 Pro's Best Take technology is powerful enough to become an AI-based social situation fixer. I have proof.
True story: At a recent family function, a dozen of us sat at a long table and smiled at a smartphone camera while one person looked away after one shot and started to eat, as one does.
I do not begrudge her the much-needed snack, but as I perused my more than half-dozen pictures, there was only one with her smiling at the camera – and, taking inventory of the rest of the faces, it was not the best moment for everyone else.
Now, I wish I'd had the Google Pixel 8 Pro on me. Unveiled Wednesday in New York City alongside the Google Pixel 8 and Google Pixel Watch 2, the new flagship Android 14 phone includes a new Tensor G3 chip that is powerful and packs a significant helping of onboard AI. Among its smart photo tricks is something called Best Take.
Put simply, Best Take can comb through a sequence of group pictures, find the faces, and let you replace each one with each person's most picture-friendly expression.
Google Pixel 8 Pro is ready to suggest the best photo (Image credit: Future)
Up until today, I'd only seen canned images produced by the Best Take technology. At Google's Made By Google event, however, Google finally let me experience the power of Best Take for myself.
In a well-lit studio space inside Google's Pier 57 West Side headquarters, Google sat me down with a trio of models. The instructions from the Google representative were simple: Try a number of different facial expressions as he called out when he was taking each shot.
Obviously, this is not exactly how you might normally capture a sequence of group photos. Often not everyone is paying attention and you're usually not calling out "And, taking a shot..." over and over again.
Google Pixel 8 Pro's Best Take lives under Tools. (Image credit: Future)
In any case, I played along, and after seating myself in the middle of this good-looking bunch, I listened as the Google rep called out and took group photos with the Google Pixel 8 Pro. Each time, I changed my expression, making sure that one image was of me smiling gleefully at the Pixel 8 Pro's 50MP main camera. The models also complied with their own silly, serious, and distracted mugs.
If you select "Suggestions", you'll see the Pixel 8 Pro's own choice for the best group shot. But we wanted to use Best Take, so we opened the photo and then selected Tools. Under that, we selected Best Take. Below our image, we saw the message "Finding similar shots to improve photo", which meant the system was going through our sequence of six or seven shots looking for the faces and best expressions.
Google Pixel 8 Pro Best Take finds all the faces and their various expressions. (Image credit: Future)
While Best Take does not use facial recognition, the system does understand what a face is. Inside the tool, we could see all the faces collected from the photos. To register a face for potential swapping, it has to be free of obstructions; a plant or hands in front of a face will make it impossible to include as a face option.
When we selected my face, I saw three options. One was a nice smile, another looked dead serious, and the last was a smirk. With a tap, we could swap in the different faces. As my head switched, the models' faces around me remained the same. More importantly, my face swap did not look like it was done by a ransom note writer. Aside from the expressions, I could not see the stitching between my new face/head and unchanging body.
Next, we selected the face of the model next to me and swapped her face until we found the best expression.
(Image credit: Future)
If any of us had radically altered our poses, say, moved our shoulders 90 degrees this way or that, Best Take would've discarded those images from the sequence. Best Take also won't work if you wait 10 minutes between shots. Too much will have changed (lighting, poses, etc.) for a believable face swap. Also, thankfully, there's no option to swap your face with someone else's.
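Conceptually, the swap itself is a region composite across a burst of frames. The sketch below illustrates that idea with a hard-coded, hypothetical bounding box standing in for face detection; Google's actual pipeline, which also aligns poses and blends seams invisibly, is not public.

```python
import numpy as np

# Toy Best Take-style composite: paste the chosen face region from one
# frame of the burst into the base frame. A production system would
# also align poses and feather the seam so the stitch is invisible.
def swap_face(base, donor, box):
    top, left, bottom, right = box
    out = base.copy()
    out[top:bottom, left:right] = donor[top:bottom, left:right]
    return out

rng = np.random.default_rng(2)
burst = [rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
         for _ in range(6)]
face_box = (120, 200, 260, 320)          # (top, left, bottom, right)
final = swap_face(burst[0], burst[3], face_box)
```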
(Image credit: Future)
What's remarkable to me is that Best Take does its AI work locally. Google isn't sending your sequence of photos to its Tensor Processing Unit-filled cloud. It's all done on the Pixel 8 Pro, and it's a surprisingly fast process.
The other bit of good news is that Best Take is non-destructive. It keeps all the photos, and if you look at the metadata – as we did – you can see a little dash and number next to the file name indicating that this is a Best Take image and not the original.
As new tech experiences go, this was a best-case scenario with great lighting and subjects who always followed instructions. Even with a sequence of photos and the ability to swap heads, my real-world scenario might have proven challenging to Best Take.
On the other hand, I say bring on the next group dinner and hand me a Google Pixel 8 Pro because I am ready to try again.
With demand for enterprise-grade large language models (LLMs) surging over the last year or so, Lamini has opened the doors to its LLM Superstation, powered by AMD's Instinct MI-series GPUs.
The firm claims it has been running LLMs on more than 100 AMD Instinct GPUs in secret for the last year in production situations – since before ChatGPT even launched. With the LLM Superstation, it's now inviting more potential customers to run their models on its infrastructure.
These platforms are powered by AMD Instinct MI210 and MI250 accelerators, as opposed to Nvidia's industry-leading H100 GPUs, which remain in short supply. By opting for AMD GPUs, Lamini quips, businesses "can stop worrying about the 52-week lead time".
AMD vs Nvidia GPUs for LLMs
Although Nvidia's GPUs – including the H100 and A100 – are the ones most commonly used to power LLMs such as ChatGPT, AMD's own hardware is comparable.
For example, the Instinct MI250 offers up to 362 teraflops of computing power for AI workloads, with the MI250X pushing this to 383 teraflops. The Nvidia A100 GPU, by way of contrast, offers up to 312 teraflops of computing power, according to TechRadar Pro sister site Tom's Hardware.
"Using Lamini software, ROCm has achieved software parity with CUDA for LLMs,” said Lamini CTO Greg Diamos, who is also the cofounder of MLPerf. “We chose the Instinct MI250 as the foundation for Lamini because it runs the biggest models that our customers demand and integrates finetuning optimizations.
“We use the large HBM capacity (128GB) on MI250 to run bigger models with lower software complexity than clusters of A100s."
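In practice, that parity shows up at the framework level: PyTorch's ROCm build exposes AMD Instinct GPUs through the same torch.cuda API used for Nvidia hardware, so existing model code runs unmodified. A minimal check, assuming a ROCm build of PyTorch is installed (the reported name and memory depend entirely on the machine it runs on):

```python
import torch

# On a ROCm build, torch.cuda reports the AMD GPU just as it would an
# Nvidia one - this is what "software parity with CUDA" means here.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    props = torch.cuda.get_device_properties(0)
    print(f"Device memory: {props.total_memory / 1024**3:.0f} GiB")
```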
(Image credit: Lamini)
AMD’s GPUs can, in theory, certainly compete with Nvidia’s. But the real crux is availability, with systems such as Lamini’s LLM Superstation able to offer enterprises the opportunity to take on workloads immediately.
There's also a question mark, however, over AMD's next-in-line GPU, the MI300. Businesses can sample the MI300A now, while the MI300X will be available to sample in the coming months.
According to Tom's Hardware, the MI300X offers up to 192GB of memory, double that of the H100, although we don't yet know exactly how its compute performance stacks up – it is, however, expected to be comparable to the H100. What would give Lamini's LLM Superstation a real boost is building and offering infrastructure powered by these next-gen GPUs.
Seemingly not content with one price hike (arguably two) in recent years, Netflix is reportedly set to increase the cost of its streaming service across the board again.
The Wall Street Journal (WSJ) states in a new report that the company is currently "discussing raising prices in several markets globally", with the rollout likely to start in the US and Canada. It's unknown how big the bump will be, nor does the WSJ know exactly when it'll begin. Netflix is keeping its lips sealed, having declined multiple inquiries from media outlets. However, the WSJ says the streaming platform plans on raising prices "a few months after the continuing Hollywood actors strike ends".
Imminent hike
At the time of this writing, SAG-AFTRA (Screen Actors Guild – American Federation of Television and Radio Artists) is still on strike, although it may end soon. The guild has recently met with the heads of Hollywood's four major production studios as both sides attempt to make a deal. Assuming the strike does end sooner than expected, this could put the Netflix price hike sometime in early 2024 as the platform continues to drive up revenue.
The removal of the cheapest ad-free Basic plan could be considered a second increase, as it forced users to get either the more expensive Standard tier or the Standard with Ads subscription to keep their service.
Analysis: The next gamble
Why, we wonder, is Netflix waiting until after the strike ends to enact its next price increase?
It's possible the company has been wanting to implement another increase but couldn't find a good reason to justify it to its subscribers. One theory we've seen floating around online argues that Netflix will use the returning actors and writers from the WGA (Writers Guild of America) strike as justification for the hike: with all the new demands and content, the platform can claim it needs to gather more money from users to pay for everything – even though the WGA's own calculations suggest the updated "contract will amount to just 0.2 percent of Netflix's annual revenue."
Now for the next question: will the upcoming hike be too much for subscribers? When the password-sharing crackdown began, debates raged online claiming, "This is the death of Netflix". However, the opposite happened: the service's subscriber count reportedly grew 236 percent. The gamble paid off – at least in the short term. It could be a different story in the long haul. Rising costs may finally prompt people to start canceling their subscriptions en masse, and Netflix might end up with egg on its face, realizing it took things too far.
Of course, we don't know, but we'll definitely keep an eye on things as they develop.
Currently, Apple and Google are engaged in a lucrative partnership that nets the former around $8 billion a year. This partnership involves Apple sending its sizable userbase (think billions!) to Google Search and, in return, it receives a commission from Google’s search ad revenue. Though Apple benefits tremendously, Google still needs Apple to promote the search engine and maintain its market dominance.
This arrangement means that Apple gets to enjoy several advantages, like having free resources to improve its non-web search capabilities, as well as having the world’s most effective bargaining chip when it comes to price negotiations with Google.
Despite this, Apple’s policy has always been to “own the core technologies underlying its products,” as Bloomberg puts it. If the tech giant ever took advantage of its knowledge and close proximity to Google’s search engine, it would wield great search power and get to keep more ad revenue. It wouldn’t have to match Google’s ability to sell advertising and search slots. Making its own in-house engine would be more than enough to increase revenue.
(Image credit: Yan Krukov / Pexels)
In fact, Apple has been developing its own search technology for years now. According to Bloomberg, John Giannandrea, a former Google executive who's now in charge of machine learning and AI at Apple, has been running a search engine team for several years. The engine is codenamed 'Pegasus,' and it's essentially a search engine for Apple's own apps that'll be making its way to more of them, including the App Store.
There’s also Spotlight, an engine that lets users find features and tools across their devices. Recently, Apple added web search support to Spotlight, which helped users find answers to their questions by pointing them to various sites.
Apple's web search has been powered by both Microsoft Bing and Google in the past. Giannandrea and his team have also been looking to integrate Apple’s search features into iOS and macOS while enhancing that tech with generative AI tools.
There's also tech that Apple already uses to enhance its search tools' capabilities, including Applebot. The crawler works by scouring the internet and indexing websites for more accurate search results, surfacing those sites to users through Siri and Spotlight.
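At its core, a crawler like Applebot does something very simple: fetch a page, extract its links, and record what it finds for an index. This standard-library sketch (network access assumed; Apple's real crawler is vastly more sophisticated) only shows the shape of that loop:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

# Minimal crawl step: download one page, collect its outbound links,
# and store the result in a toy "index" mapping URL -> links found.
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(url):
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkParser()
    parser.feed(html)
    return [urljoin(url, link) for link in parser.links]

index = {"https://example.com": crawl("https://example.com")}
```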
Apple also has an advertising technology team – something that would put it much closer to Google if it ever decided to actually create its own full-fledged search engine.
(Image credit: Future)
So what's stopping the massive tech giant? It seems that Apple simply has no desire to build its own search engine. Its partnership with Google works best, at least according to Eddy Cue, Apple's senior vice president of services.
Cue is most likely on the money, as Apple tends to keep partnerships that serve its best interests. If Apple ever decided a search engine was worth fully investing in, it would simply build one – much as it partnered with Intel for its chips for years, until it didn't, and began designing its own silicon in-house.
A search engine is its own behemoth, one that Apple is probably not interested in tackling anytime soon.