Sunday, November 3, 2024

Best Christmas Decorations on Amazon in 2024

We scoured Amazon and found all the best Christmas decorations, from lights to trees and everything in between. All it takes is a few clicks to make your season merry and bright.

from CNET https://ift.tt/RDuK7Zg

Latest Tech News

Over 27 million tons of single-use polystyrene packaging are produced worldwide each year, yet only 12% is recycled; most ends up in landfills after its initial use.

Researchers at RMIT University and Riga Technical University have developed an innovative way to generate electricity using waste polystyrene, addressing both energy needs and the environmental impact of the ubiquitous packaging material.

The invention repurposes discarded polystyrene into a device that generates static electricity from motion, such as wind or airflow. The device is a thin patch, made from multiple layers of polystyrene, each around "one-tenth the thickness of a human hair," according to lead researcher Dr. Peter Sherrell, who went on to explain, “We can produce this static electricity just from air blowing on the surface of our clever patches, then harvest that energy.”

Producing electricity consistently

The patch, which can capture turbulent airflow from air conditioning units, could reduce their energy demand by up to 5% and lower the carbon footprint of these systems. Tests show the device can reach up to 230 volts, comparable to household voltage but at a lower power level.

Sherrell noted, “The biggest numbers come from a compression and separation, where you've got faster speeds and bigger motion, while smaller motion generates less energy. This means that in addition to air conditioners, integrating our patches in high traffic areas such as underground walkways could supplement local energy supply without creating additional demand on the grid."

The device’s longevity stems from the same properties that make polystyrene slow to decompose. “The great thing here is the same reason that it takes 500 years for polystyrene to break down in landfill makes these devices really stable – and able to keep making electricity for a long time,” Sherrell said.

This process involves learning how to modify plastics to optimize their energy-generating potential: “We've studied which plastic generates more energy and how when you structure it differently – make it rough, make it smooth, make it really thin, make it really fat – how that changes all this charging phenomenon.”

This static electricity generation project is part of the team’s ongoing research into triboelectric nanogenerators, as published in Advanced Energy and Sustainability Research. RMIT has filed a provisional patent for its device and is now looking for industry partners to help develop the technology for commercial applications.

from TechRadar - All the latest technology news https://ift.tt/yY7WKHC

Saturday, November 2, 2024

Best Massage Guns for 2024

Soothe those achy muscles with one of the best massage guns on the market. CNET tested top brands to find the ones worth your money.

from CNET https://ift.tt/ce2OjPy

Latest Tech News

From streamlining operations to automating complex processes, AI has revolutionized how organizations approach tasks. However, as the technology becomes more prevalent, organizations are discovering that the rush to embrace AI may come with unintended consequences.

A report by Swimlane reveals that while AI offers tremendous benefits, its adoption has outpaced many companies' ability to safeguard sensitive data. As businesses deeply integrate AI into their operations, they must also contend with the associated risks, including data breaches, compliance lapses, and security protocol failures.

AI works with large language models (LLMs), which are trained using vast datasets that often include publicly available information. These datasets can consist of text from sources like Wikipedia, GitHub, and various other online platforms, which provide a rich corpus for training the models. This means that if a company’s data is available online, it will likely be used for training LLMs.

Data handling and public LLMs

The study revealed a gap between protocol and practice when it comes to sharing data with public LLMs. Although 70% of organizations claim to have specific protocols to safeguard the sharing of sensitive data with public LLMs, 74% of respondents are aware that individuals within their organizations still input sensitive information into these platforms.

This discrepancy highlights a critical flaw in enforcement and employee compliance with established security measures. Furthermore, a constant barrage of AI-related messaging is wearing down professionals: 76% of respondents agree that the market is currently saturated with AI-related hype.

This overexposure is causing a form of AI fatigue: over half (55%) of those surveyed reported feeling overwhelmed by the persistent focus on AI, signalling that the industry may need to shift its approach to promoting the technology.

Interestingly, despite this fatigue, experience with AI and machine learning (ML) technologies is becoming a crucial factor in hiring decisions. A striking 86% of organizations reported that familiarity with AI plays a significant role in determining the suitability of candidates. This shows how ingrained AI is becoming, not just in cybersecurity tools but in the workforce needed to manage them.

In the cybersecurity sector, AI and LLMs have had a positive impact, as the report claims 89% of organizations credit AI technologies for boosting the efficiency of their cybersecurity teams.

from TechRadar - All the latest technology news https://ift.tt/42I9xuN

Best Internet and TV Bundles for November 2024

Sure, bundling internet and TV is convenient, but it could also save you some money. I'd recommend these internet and TV bundles most.

from CNET https://ift.tt/EXJNQyt

Friday, November 1, 2024

Play Death Note, Hot Wheels and More in November on PlayStation Plus

All PS Plus subscribers can access these games on Nov. 5.

from CNET https://ift.tt/Zv6eTCq

Latest Tech News

AI chatbot Claude is now available as a desktop app for both Windows and Mac computers. The public beta is available to free users as well as subscribers to the premium version of the chatbot. Claude creator Anthropic describes the desktop versions as "fast, focused, and designed for deep work," suggesting that those who want to use Claude without opening a browser will find it just as capable as the website or the Claude mobile app.

One way the desktop app is more efficient is by having a keyboard shortcut to open Claude. After installing the app, you can press Ctrl + Alt + Space to access the AI. That's a boon if you have a lot of other tasks running and don't want to navigate to the website.

The other major benefit of the desktop app is that it frees you from relying solely on mobile devices or web browsers to access your conversations with Claude. You could start the chat on your smartphone, then open the desktop app when you get home or vice versa, with a visit to the website if you're at a public library or similar spot. This continuity can help speed up all kinds of Claude-based projects.

Anthropic also debuted a small upgrade to the Claude mobile apps: native dictation. You can record up to 10 minutes of audio that Claude will transcribe and then respond to in text form on the app. It's not a full-on voice interactive feature, but it does mean you can at least submit prompts to the AI chatbot without typing.

AI at home

The desktop version of Claude uses Anthropic's latest AI model, Claude 3.5 Sonnet, but it can't do everything the web version does. In particular, it lacks the new Computer Use feature that lets Claude control your cursor and type on your behalf. That's not too much of a surprise since Computer Use and the desktop apps are still in beta. Presumably, the feature will arrive when both are more mature.

Anthropic's timing in releasing the Claude desktop apps is interesting, as it is part of a sudden flurry of releases from rival AI chatbots. Both OpenAI's ChatGPT and Perplexity AI have introduced desktop apps in recent weeks. Each offers some variation of the web version of its respective chatbot, with many, though not all, of the same features. The appeal of a more convenient and accessible AI chatbot is obvious.

That's why Microsoft embedded its Copilot AI directly into the Windows 11 operating system. All of the AI chatbot developers want to encourage current and potential users to stick with their products regardless of where they are or what they are doing. It's going to be another central frontier for the industry, just like mobile apps a decade ago.

from TechRadar - All the latest technology news https://ift.tt/SoQYDrq

Thursday, October 31, 2024

The 4 Best Waterproof Mattress Protectors of 2024, as Tested by Our Experts

Keep your entire mattress clean with these waterproof mattress covers.

from CNET https://ift.tt/EzWA1XC

Latest Tech News

We’ve been hearing rumors about a ChatGPT search engine for a while, and now it's finally live. Rather than being a whole new website called 'SearchGPT', as many had expected, it’s simply an upgrade to the existing ChatGPT website and all the ChatGPT apps for Windows, Mac, and smartphones.

When you’re talking to ChatGPT, it will now ask whether it should search the web if it decides that would produce better results, but you can also manually trigger a web search at any time. As you’d expect, the feature is available immediately: all ChatGPT Plus and Team users, along with everyone on the SearchGPT waitlist, get access today. Enterprise and Edu users of ChatGPT will get access over the next few weeks.

A new Citations bar will open on the right of the window when you click a source link. (Image credit: OpenAI)

How it works

If you look at the ChatGPT prompt bar you’ll see a new Search icon. Tap or click it and you’ll be searching the web with ChatGPT rather than engaging in a conversation. The results are a bit like the AI summaries Google already provides in its search engine, but with an easily identifiable link to sources after each piece of text. When clicked, these sources open a sidebar showing citations.

In case you were wondering, the waitlist for SearchGPT is now closed, so if you haven’t already signed up, it’s now too late. As for when the rest of the ChatGPT free tier will get it, OpenAI says, “We’ll roll out to all Free users over the coming months.”

ChatGPT search is perfect for all the jobs you'd normally use Google for. (Image credit: OpenAI)

What's interesting is that OpenAI has partnered with various industry sources to provide its own maps (not Google Maps), as well as weather, stocks, sports, and news information. OpenAI says it has "partnered with news and data providers to add up-to-date information and new visual designs for categories like weather, stocks, sports, news, and maps."

ChatGPT Search is already looking enticing and could be the first real threat to Google in years. With ChatGPT Search, you’re essentially getting the natural language capabilities of ChatGPT blended with up-to-the-minute information from the web, and that could be just what people are searching for.

from TechRadar - All the latest technology news https://ift.tt/QTzJcAZ

Wednesday, October 30, 2024

Best Online Banks for October 2024

These online banks provide the best of banking: great rates, low fees and innovative technology.

from CNET https://ift.tt/KaBomJH

Best Early Black Friday Deals Under $25: Stock Up on Everyday Essentials

Pricey gadgets aren't the only thing on sale for Black Friday. There are also plenty of affordable bargains on household basics.

from CNET https://ift.tt/RG5FLJY

Latest Tech News

Bandwidth limitations have become a significant bottleneck in AI and high-performance computing (HPC): constrained interconnects leave GPUs underutilized, with nearly half of their computational power going to waste.

Nvidia is not expected to release optical interconnects for its NVLink protocol until the "Rubin Ultra" GPU compute engine launches in 2027.

This delay has led hyperscalers and cloud builders to explore ways to leapfrog Nvidia’s technology by adopting optical interconnects earlier.

Introducing ChromX

Xscape Photonics, an optical interconnect company spun out of research at Columbia University, is using photonics to realize scalable, high-bandwidth, energy-sustainable, and cost-effective solutions to enable the next generation of AI, ML, and simulation hardware.

This could help the AI industry save billions of dollars in wasted GPU capacity while also offering a path to greener, more sustainable AI infrastructures.

The Next Platform recently took a closer look at Xscape Photonics and spoke with the team behind it, including CEO Vivek Raghunathan, a former MIT researcher and Intel engineer.

Raghunathan highlighted the inefficiencies of current GPU systems, explaining that as scaling continues, the problem shifts "from GPU device-level performance to a system-level networking problem."

This is where Xscape’s technology comes into play. By converting electrical signals into optical ones directly within the GPU, Xscape can dramatically increase bandwidth while simultaneously reducing power consumption.

The startup’s solution, called the "ChromX" platform, uses a laser that can transmit multiple wavelengths of light simultaneously through a single optical fiber - up to 128 different wavelengths (or "colors"). This enables a 32-fold increase in bandwidth compared to lasers that use only four wavelengths.
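The arithmetic behind that claim is straightforward wavelength-division multiplexing: total fiber bandwidth scales linearly with the number of wavelengths carried. A minimal sketch, assuming a hypothetical per-wavelength data rate (the article does not state one):

```python
# Hypothetical per-wavelength rate for illustration only; the article
# does not specify ChromX's actual per-channel data rate.
PER_CHANNEL_GBPS = 100

def aggregate_bandwidth_gbps(wavelengths: int,
                             rate_gbps: float = PER_CHANNEL_GBPS) -> float:
    """Total bandwidth of one fiber carrying `wavelengths` WDM channels."""
    return wavelengths * rate_gbps

baseline = aggregate_bandwidth_gbps(4)    # conventional 4-wavelength laser
chromx = aggregate_bandwidth_gbps(128)    # up to 128 "colors" per fiber
print(chromx / baseline)  # 32.0 -- the 32-fold increase cited
```

Whatever the per-channel rate turns out to be, the ratio between the two configurations stays 32.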

The ChromX platform also relies on simpler modulation schemes like NRZ (Non-Return-to-Zero), which reduce latency compared to higher-order schemes like PAM-4 used in other systems such as InfiniBand and Ethernet. The ChromX platform is programmable, allowing it to adjust the number of wavelengths to match the specific needs of an AI workload, whether for training or inference tasks.
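The trade-off between the two schemes can be sketched generically (this is an illustration, not Xscape's actual signaling implementation): NRZ encodes one bit per symbol using two levels, while PAM-4 packs two bits per symbol into four levels, halving the symbol count but requiring receivers to distinguish more closely spaced levels, which adds decoding complexity and latency.

```python
def nrz(bits):
    """Non-Return-to-Zero: one bit per symbol, two signal levels."""
    return [1 if b else -1 for b in bits]

def pam4(bits):
    """PAM-4: two bits per symbol, four signal levels (Gray-coded)."""
    levels = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
    return [levels[pair] for pair in zip(bits[::2], bits[1::2])]

data = [1, 0, 1, 1, 0, 0, 1, 0]
print(nrz(data))   # 8 symbols: [1, -1, 1, 1, -1, -1, 1, -1]
print(pam4(data))  # 4 symbols: [3, 1, -3, 3]
```

PAM-4 moves the same data in half the symbols, but each symbol must be resolved among four levels instead of two, which is the complexity NRZ avoids.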

Raghunathan told The Next Platform’s Timothy Prickett Morgan, “The vision is to match in-package communication bandwidth to off-package communication escape bandwidth. And we think when we use our multicolor approach, we can match that so that giant datacenters - or multiple datacenters - behave as one big GPU.”

The potential impact of this technology is enormous. AI workloads consume vast amounts of energy, and with data center demand projected to triple by 2035, power grids may struggle to keep up. Xscape Photonics’ innovations could offer a vital solution, enabling AI systems to operate more efficiently and sustainably.

from TechRadar - All the latest technology news https://ift.tt/wD9LrH2

Tuesday, October 29, 2024

TP-Link Deco X90 Mesh Router Review: Top Speeds at a Great Discount

I used to only recommend the Deco X90 as an upgrade pick, but it's come down significantly in price.

from CNET https://ift.tt/9vM0FwB

Latest Tech News

A leading expert has raised critical questions about the validity of claims surrounding "Zettascale" and "Exascale-class" AI supercomputers.

In an article that delves deep into the technical intricacies of these terms, Doug Eadline from HPCWire explains how terms like exascale, which traditionally denote computers achieving one quintillion floating-point operations per second (FLOPS), are often misused or misrepresented, especially in the context of AI workloads.

Eadline points out that many of the recent announcements touting "exascale" or even "zettascale" performance are based on speculative metrics, rather than tested results. He writes, "How do these 'snort your coffee' numbers arise from unbuilt systems?" - a question that highlights the gap between theoretical peak performance and actual measured results in the field of high-performance computing. The term exascale has historically been reserved for systems that achieve at least 10^18 FLOPS in sustained, double-precision (64-bit) calculations, a standard verified by benchmarks such as the High-Performance LINPACK (HPLinpack).
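For scale, these prefixes differ by factors of 1,000, and the traditional threshold is a measured FP64 rate. A quick sketch (Frontier's HPL figure below is an approximate public TOP500 result, not from this article):

```python
EXA, ZETTA = 10**18, 10**21  # FLOPS thresholds for exascale and zettascale

# Approximate sustained FP64 HPL result for Frontier (~1.2 exaFLOPS).
frontier_rmax = 1.2 * EXA

print(frontier_rmax >= EXA)  # True: a measured, verified exascale system
print(ZETTA // EXA)          # 1000: zettascale is 1,000x exascale
```

A genuine zettascale claim would therefore require sustained FP64 performance a thousand times beyond today's verified leaders.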

Car comparison

As Eadline explains, the distinction between FLOPS in AI and HPC is crucial. While AI workloads often rely on lower-precision floating-point formats such as FP16, FP8, or even FP4, traditional HPC systems demand higher precision for accurate results.
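The precision gap is easy to demonstrate. In this minimal NumPy sketch (my illustration, not from the article), summing 0.001 ten thousand times should give exactly 10: FP64 gets extremely close, while FP16 drifts badly because each addend and each running sum must be rounded to far fewer bits.

```python
import numpy as np

total16 = np.float16(0.0)
total64 = np.float64(0.0)
for _ in range(10_000):
    total16 = np.float16(total16 + np.float16(0.001))
    total64 += np.float64(0.001)

print(total64)  # ~10.0, accurate to many digits
print(total16)  # far below 10.0: the sum stalls once additions round away
```

The same rounding that is tolerable for many AI workloads is exactly what disqualifies low-precision FLOPS from traditional HPC benchmarks.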

The use of these lower-precision numbers is what leads to inflated claims of exaFLOP or even zettaFLOP performance. According to Eadline, "calling it 'AI zetaFLOPS' is silly because no AI was run on this unfinished machine."

He further emphasizes the importance of using verified benchmarks like HPLinpack, which has been the standard for measuring HPC performance since 1993, and how using theoretical peak numbers can be misleading.

The two supercomputers that are currently part of the exascale club - Frontier at Oak Ridge National Laboratory and Aurora at Argonne National Laboratory - have been tested with real applications, unlike many of the AI systems making exascale claims.

To explain the difference between various floating-point formats, Eadline offers a car analogy: "The average double precision FP64 car weighs about 4,000 pounds (1814 Kilos). It is great at navigating terrain, holds four people comfortably, and gets 30 MPG. Now, consider the FP4 car, which has been stripped down to 250 pounds (113 Kilos) and gets an astounding 480 MPG. Great news. You have the best gas mileage ever! Except, you don’t mention a few features of your fantastic FP4 car. First, the car has been stripped down of everything except a small engine and maybe a seat. What’s more, the wheels are 16-sided (2^4) and provide a bumpy ride as compared to the smooth FP64 sedan ride with wheels that have somewhere around 2^64 sides. There may be places where your FP4 car works just fine, like cruising down Inference Lane, but it will not do well heading down the FP64 HPC highway."

Eadline’s article serves as a reminder that while AI and HPC are converging, the standards for measuring performance in these fields remain distinct. As he puts it, "Fuzzing things up with 'AI FLOPS' will not help either," pointing out that only verified systems that meet the stringent requirements for double-precision calculations should be considered true exascale or zettascale systems.

from TechRadar - All the latest technology news https://ift.tt/bKkBHQa

Monday, October 28, 2024

How to Turn Off the Most Annoying Apple Intelligence Feature on iOS 18.1

Everyone else is raving about this new AI feature on the iPhone, but it continues to vex me. And if I don't like it, maybe you don't either.

from CNET https://ift.tt/as9pP6Y

Latest Tech News

They say fortune favors the bold, so why not rebel from cookie-cutter colorways and mix things up with some eye-catching tech instead? As a...