Friday, April 26, 2024

Latest Tech News

A new interview with the director behind the viral Sora clip Air Head has revealed that AI played a smaller part in its production than was originally claimed. 

In an interview with Fxguide, Patrick Cederberg (who handled post-production on the viral video) confirmed that OpenAI's text-to-video program was far from the only force involved in its production. The 1-minute and 21-second clip was made with a combination of traditional filmmaking techniques and post-production editing to achieve the look of the final picture.

Air Head was made by ShyKids and tells the short story of a man with a literal balloon for a head. While there's a human voiceover, the way OpenAI pushed the clip on social channels such as YouTube certainly left the impression that the visuals were purely powered by AI – but that's not entirely true.

As revealed in the behind-the-scenes clip, a ton of work was done by ShyKids who took the raw output from Sora and helped to clean it up into the finished product. This included manually rotoscoping the backgrounds, removing the faces that would occasionally appear on the balloons, and color correcting. 

Then there's the fact that Sora takes a ton of time to actually get things right. Cederberg explains that there were "hundreds of generations at 10 to 20 seconds apiece", which were then tightly edited in what the team described as a "300:1" ratio of what was generated versus what was primed for further touch-ups.
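Taken at face value, those figures imply a striking amount of raw footage. A back-of-envelope sketch, purely illustrative and using only the numbers quoted above:

```python
# Back-of-envelope arithmetic from the figures quoted above (illustrative only)
final_runtime_s = 81                  # the clip runs 1 minute and 21 seconds
generated_to_kept_ratio = 300         # the team's "300:1" estimate
raw_output_s = final_runtime_s * generated_to_kept_ratio
print(raw_output_s)                   # → 24300 seconds
print(round(raw_output_s / 3600, 2))  # → 6.75 hours of raw Sora output
```

In other words, roughly a feature film's worth of generated material for 81 seconds of finished video.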

Such manual work also included editing out the head, which would disappear and reappear, and even changing the color of the balloon itself, which would appear red instead of yellow. While Sora was used to generate the initial imagery with good results, there was clearly a lot more happening behind the scenes to make the finished product look as good as it does, so we're still a long way out from instantly generated movie-quality productions.

Sora remains tightly under wraps save for a handful of carefully curated projects that have been allowed to surface, with Air Head among the most popular. The clip has over 120,000 views at the time of writing, with OpenAI touting it as "experimentation" with the program, downplaying the obvious work that went into the final product.

Sora is impressive but we're not convinced

While OpenAI has done a decent job of showcasing what its text-to-video model can do, the lack of transparency is worrying.

Air Head is an impressive clip by a talented team, but it was subject to a ton of editing to get the final product to where it is in the short. 

It's not quite the one-click-and-you're-done approach that many of the tech's boosters have represented it as. It turns out to be a tool for enhancing imagery rather than creating it from scratch – something that's already common enough in video production – making Sora seem less revolutionary than it first appeared.


from TechRadar - All the latest technology news https://ift.tt/BkUNRx8

Thursday, April 25, 2024

TCL 50 XL 5G First Impressions: So Many Features for a $160 Phone - CNET

This is one of the cheapest phones thus far to include NFC for Google Pay and a 120Hz refresh rate display.

from CNET https://ift.tt/RaAK9Sj

Latest Tech News

Five years after LPDDR5 was first introduced, and a matter of months before JEDEC finalizes the LPDDR6 standard, Samsung has announced a new, faster version of its LPDDR5X DRAM.

When the South Korean tech giant debuted LPDDR5X back in October 2022, the natural successor to LPDDR5 ran at a nippy 8.5Gbps. This new chip runs at 10.7Gbps, over 11% faster than the 9.6Gbps LPDDR5T variant offered by its archrival, SK Hynix.
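The speed claim checks out as simple arithmetic on the two per-pin data rates:

```python
# Quick check of the speed claim (per-pin data rates in Gbps)
new_lpddr5x = 10.7   # Samsung's new LPDDR5X
lpddr5t = 9.6        # SK Hynix's LPDDR5T
speedup_pct = (new_lpddr5x - lpddr5t) / lpddr5t * 100
print(round(speedup_pct, 1))  # → 11.5
```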

Samsung is building its new chips on a 12nm-class process, which means the new DRAM isn't only faster but much smaller too – the smallest chip size of any LPDDR, in fact – making it ideal for on-device AI applications.

Improved power efficiency

“As demand for low-power, high-performance memory increases, LPDDR DRAM is expected to expand its applications from mainly mobile to other areas that traditionally require higher performance and reliability such as PCs, accelerators, servers and automobiles,” said YongCheol Bae, Executive Vice President of Memory Product Planning of the Memory Business at Samsung Electronics. “Samsung will continue to innovate and deliver optimized products for the upcoming on-device AI era through close collaboration with customers.”

Samsung's 10.7Gbps LPDDR5X boosts performance by over 25% and increases capacity by upward of 30%, compared to LPDDR5. Samsung says it also elevates the single package capacity of mobile DRAM to 32GB.

LPDDR5X offers several power-saving technologies, which bolster power efficiency by 25% and allow the chip to enter low-power mode for extended periods.

Samsung intends to begin mass production of the 10.7Gbps LPDDR5X DRAM in the second half of this year upon successful verification with mobile application processor (AP) and mobile device providers.


from TechRadar - All the latest technology news https://ift.tt/0IRNpqD

Wednesday, April 24, 2024

Best Memory Foam Mattresses for 2024 - CNET

Memory foam mattresses are both comfortable and supportive. Here are the best memory foam mattresses, tested and reviewed by our sleep experts.

from CNET https://ift.tt/2ZmAIRY

Latest Tech News

Apple may again be partnering with silicon chip manufacturer TSMC to produce its own AI server processor, according to a leak from Chinese social network Weibo.

Yes, news of Apple's next step into the world of artificial intelligence tools is unironically brought to you by, MacRumors reports, "the Weibo user known as 'Phone Chip Expert'", who suggests that the processor will be produced using TSMC's state-of-the-art 3-nanometer node.

As MacRumors points out, the Weibo user known as Phone Chip Expert has form, having correctly identified ahead of formal announcements that the iPhone 7 would be water resistant and that the A16 Bionic chip would be exclusive to the iPhone 14’s Pro variant.

Apple AI progress

The Weibo user known as Phone Chip Expert may well be about to strike again with their clairvoyant powers, but it’s unclear as to exactly when Apple would formally announce such an AI processor, let alone launch it commercially. 

In an increasingly AI-crazed world driven by data centers, it doesn't surprise us that Apple is striving to be self-sufficient in its cloud computing processes. 

Apple is a behemoth large enough to run its own data centers, and as generative AI tools, such as Apple’s own upcoming on-device large language model (LLM), increasingly trickle down to B2B and consumer audiences, it may as well exert as much control and oversight as possible over how that processing is done.

It's clear that Apple has designs on the 'AI space' (blech), and supposedly even has credible ideas about how it might improve our lives, but neither we, you, nor the Weibo user known as Phone Chip Expert will truly know what those are until, probably, the company's Worldwide Developers Conference (WWDC) in June.


from TechRadar - All the latest technology news https://ift.tt/mWG2XAO

Tuesday, April 23, 2024

Monday, April 22, 2024

Latest Tech News

OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video (below) showing what TED Talks might look like in 40 years.

To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories and underwater tunnels.

The video again shows both the incredible potential of OpenAI Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone piloting skills and expensive kit that goes way beyond the new DJI Avata 2.

Sora's new video shows that these kind of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool itself will cost and who it'll be available to.

But the video (above) also shows that Sora still falls well short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds, and there's plenty of uncanny valley nightmare fuel in the background.

The result is an experience that's exhilarating, while also leaving you feeling strangely off-kilter – like touching down again after a sky dive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.

How was the video made?

A video created by OpenAI Sora for TED Talks

(Image credit: OpenAI / TED Talks)

OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experiences as one of Sora's alpha testers.

Trillo told Business Insider about the kinds of prompts he uses, including "a cocktail of words that I use to make sure that it feels less like a video game and something more filmic". Apparently these include terms like "35 millimeter", "anamorphic lens", and "depth of field lens vignette", which are needed or else Sora will "kind of default to this very digital-looking output".
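To illustrate the kind of prompt layering Trillo describes, here's a sketch that appends his film-look modifiers to a base prompt. Only the modifier terms come from the interview; the base prompt itself is hypothetical:

```python
# Sketch of the prompt layering Trillo describes. The base prompt is
# hypothetical; the film-look modifiers are the terms he quotes.
base_prompt = "an FPV drone shot flying through a futuristic conference hall"
film_look_modifiers = ["35 millimeter", "anamorphic lens", "depth of field lens vignette"]
full_prompt = ", ".join([base_prompt] + film_look_modifiers)
print(full_prompt)
```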

Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright. One of Trillo's most interesting observations is that Sora is currently "like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it".

This means that it's still a long way off from being truly consistent with people and object states, something that OpenAI admitted in an earlier blog post. OpenAI said that Sora "currently exhibits numerous limitations as a simulator", including the fact that "it does not accurately model the physics of many basic interactions, like glass shattering".

These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.


from TechRadar - All the latest technology news https://ift.tt/6B7pFAD

Sunday, April 21, 2024

What Do Your Dreams Mean? Sleep Experts Reveal Common Interpretations - CNET

Our weird and wacky dreams can be open to interpretation, but they might actually mean something. Here are common dream themes explained by sleep experts.

from CNET https://ift.tt/Wn8e7sI

Which Size Heat Pump Is Right for Your Home? - CNET

Improve your heat pump's quality and efficiency by ensuring you install the proper size pump for your home.

from CNET https://ift.tt/sEjpLGQ

Latest Tech News

AMD is introducing two new adaptive SoCs – the Versal AI Edge Series Gen 2 for AI-driven embedded systems, and the Versal Prime Series Gen 2 for classic embedded systems.

Multi-chip solutions typically come with significant overheads, but a single hardware architecture isn't fully optimized for all three AI phases – preprocessing, AI inference, and postprocessing.

To tackle these challenges, AMD has developed a single-chip heterogeneous processing solution that streamlines these processes and maximizes performance.

Early days yet

The Versal AI Edge Series Gen 2 adaptive SoCs provide end-to-end acceleration for AI-driven embedded systems, which the tech giant says is built on a foundation of improved safety and security. AMD has integrated a high-performance processing system, incorporating Arm CPUs and next-generation AI Engines, with top-class programmable logic, creating a device that expertly handles all three computational phases required in embedded AI applications.

AMD says the Versal AI Edge Series Gen 2 SoCs are suitable for a wide spectrum of embedded markets, including those with high-security, high-reliability, long-lifecycle, and safety-critical demands. Use cases include autonomous driving, industrial PCs, autonomous robots, edge AI boxes, and ultrasound, endoscopy and 3D imaging in healthcare.

The processing system of the integrated CPUs includes up to 8x Arm Cortex-A78AE application processors, up to 10x Arm Cortex-R52 real-time processors, and support for USB 3.2, DisplayPort 1.4, 10G Ethernet, PCIe Gen5, and more.

The devices meet ASIL D / SIL 3 operating requirements and are compliant with a range of other safety and security standards. They reportedly offer up to three times the TOPS/watt for AI inference and up to ten times the scalar compute with powerful CPUs for postprocessing.

Salil Raje, senior vice president of AMD’s Adaptive and Embedded Computing Group, said, “The demand for AI-enabled embedded applications is exploding and driving the need for solutions that bring together multiple compute engines on a single chip for the most efficient end-to-end acceleration within the power and area constraints of embedded systems. Backed by over 40 years of adaptive computing leadership in high-security, high-reliability, long-lifecycle, and safety-critical applications, these latest generation Versal devices offer high compute efficiency and performance on a single architecture that scales from the low-end to high-end.”

Early access documentation and evaluation kits for the devices are available now. The first silicon samples of Versal Series Gen 2 are expected at the start of next year, with production slated to begin in late 2025.


from TechRadar - All the latest technology news https://ift.tt/qnF3WtI

Saturday, April 20, 2024

Prime Video: The 32 Absolute Best TV Shows to Watch - CNET

Here are some highly rated series to try, plus a look at what's new in April.

from CNET https://ift.tt/ynGktLq

Beat the Sneezes: Tips and Apps to Tackle Seasonal Allergies Head-On - CNET

Allergens can be found in spots you don't often think about. Follow these tips to beat seasonal allergies like a pro.

from CNET https://ift.tt/ohifNV7

Latest Tech News

We're set to hear much more about what's coming with macOS 15 when Apple's annual Worldwide Developers Conference (WWDC) gets underway on June 10 – and one app in particular is rumored to be getting a major upgrade.

That app is the Calculator app, and while it perhaps isn't the most exciting piece of software that Apple makes, AppleInsider reckons it's in line for "the most significant upgrade" the app has been given "in years".

It's so substantial, it's got its own codename: GreyParrot (that's said to be a nod towards the African grey parrot, known for its cognitive abilities). Part of the upgrade will apparently include the Math Notes feature we've already heard about in relation to a Notes app upgrade due in iOS 18.

It sounds as though Math Notes is going to make it easier to ferry calculations between the Notes and the Calculator apps. A new sidebar showing the Calculator history is reported to be on the way too. This might well get its own button on the app, AppleInsider says.

Currency conversions

Calculator for macOS

Currency conversions currently require a pop-up dialog (Image credit: Future)

A visual redesign is also apparently on the way, with "rounded buttons and darker shades of black" to match the iOS Calculator. Users will also be able to resize the Calculator app window, with the buttons resizing accordingly, which isn't currently possible.

Unit conversion is going to be made more intuitive and easier to access, AppleInsider says, with no need to open up the menus to select conversion types – at the moment, it's necessary to select currencies in a pop-up dialog.

The thinking is that Apple wants to better compete with apps such as Microsoft's OneNote and the third-party Calcbot app for macOS. It's been a long time since the Calculator app was changed in any way, and its rather basic feature set means it's lagging behind the alternatives.

According to AppleInsider, there's no guarantee that Apple will go through with this Calculator upgrade, but it seems likely. Expect to hear much more about macOS 15, iOS 18, and Apple's other software products at WWDC 2024 on June 10.


from TechRadar - All the latest technology news https://ift.tt/FGuKTqy

Friday, April 19, 2024

Earth Day Deals 2024: Save Some Green on Eco-Friendly Tech and Home Products - CNET

Earth Day is on April 22, so you only have a couple more days to take advantage of these environmentally conscious offers.

from CNET https://ift.tt/kXRiD0p

Latest Tech News

It's been rumored for a while now that Google is considering charging users for AI-powered results, particularly via a premium search option that leverages generative AI.

Whether that will happen remains to be seen, but Google is ending the era of free access to its Gemini API, signaling a new financial strategy within its AI development.

Developers previously enjoyed free access to lure them towards Google’s AI products and away from OpenAI’s, but that is set to change. OpenAI was first to market and has already monetized its APIs and LLM access. Now Google is planning to emulate this through its cloud and AI Studio services, and it seems the days of unfettered free access are numbered.

RIP PaLM API

In an email to developers, Google said it is shutting down developer access to its PaLM API (the pre-Gemini model used to build custom chatbots) via AI Studio on August 15. The API was deprecated back in February.

The tech giant is hoping to convert free users into paying customers by promoting the stable Gemini 1.0 Pro. "We encourage testing prompts, tuning, inference, and other features with stable Gemini 1.0 Pro to avoid interruptions," the email reads. "You can use the same API key you used for the PaLM API to access Gemini models through Google AI SDKs."

Pricing for the paid plan begins at $7 for one million input tokens and rises to $21 for the same number of output tokens.
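At those rates, per-request costs are easy to estimate. A minimal sketch assuming the article's quoted prices (the helper function is hypothetical, not part of any Google SDK):

```python
# Cost estimate at the article's quoted rates: $7 per million input
# tokens, $21 per million output tokens. Hypothetical helper, not an
# official Google API.
def gemini_pro_cost_usd(input_tokens: int, output_tokens: int) -> float:
    input_rate = 7.0 / 1_000_000     # USD per input token
    output_rate = 21.0 / 1_000_000   # USD per output token
    return input_tokens * input_rate + output_tokens * output_rate

# e.g. a workload of 250k input tokens and 50k output tokens:
print(round(gemini_pro_cost_usd(250_000, 50_000), 2))  # → 2.8
```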

There is one exception to Google's plans – PaLM and Gemini will remain accessible to customers paying for Vertex AI in Google Cloud. However, as HPCWire points out, "Regular developers on cheaper budgets typically use AI Studio as they cannot afford Vertex."


from TechRadar - All the latest technology news https://ift.tt/yjbUn3t

Latest Tech News

Nvidia acquires SchedMD and launches Nemotron 3 open models, providing datasets, AI tools, and libraries for multi-agent workflows. from L...