As cryptocurrency tumbles, prices for new and used GPUs continue to fall

AMD's Radeon RX 6800 and 6800 XT. (credit: Sam Machkovech)

Cryptocurrency has had a rough year. Bitcoin has fallen by more than 50 percent since the start of the year, from nearly $48,000 in January to just over $20,000 as of publication. Celsius, a major cryptocurrency “bank,” suspended withdrawals earlier this week, and the Coinbase crypto exchange announced a round of layoffs this past Tuesday after pausing hiring last month.

It may be small comfort to anyone who wanted to work at Coinbase or spent hard-earned money on an ugly picture of an ape because a celebrity told them to, but there’s some good news for PC builders and gamers in all of this. As tracked by Tom’s Hardware, prices for new and used graphics cards continue to fall, coming down from their peak prices in late 2021 and early 2022. For weeks, it has generally been possible to go to Amazon, Newegg, or Best Buy and buy current-generation GPUs for prices that would have seemed like bargains six months or a year ago, and pricing for used GPUs has fallen further.

As Tom’s Hardware reports, most mid-range Nvidia GeForce RTX 3000-series cards are still selling at or slightly over their manufacturer-suggested retail prices—the 3050, 3060, and 3070 series are all still in high demand. But top-end 3080 Ti, 3090, and 3090 Ti GPUs are all selling below their (admittedly astronomical) MSRPs right now, as are almost all of AMD’s Radeon RX 6000 series cards.

A PC monitor with a 500 Hz refresh rate is coming from Asus

(credit: Nvidia)

A 24-inch PC monitor that can update its image 500 times per second will be available soon, Asus and Nvidia announced Tuesday. The monitor should push desktop displays past the 360 Hz maximum native refresh rate they top out at today while putting a mysterious new spin on an old panel technology.
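
For a rough sense of what that refresh-rate jump means, here is a quick frame-time calculation (standard arithmetic, not a figure from Asus or Nvidia):

    # Frame time at a given refresh rate: 1000 ms divided by refreshes per second
    for hz in (360, 500):
        print(f"{hz} Hz -> {1000 / hz:.2f} ms per refresh")
    # 360 Hz -> 2.78 ms per refresh
    # 500 Hz -> 2.00 ms per refresh

In other words, each refresh interval is roughly 0.8 ms shorter than on today's 360 Hz monitors.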

Aptly named the Asus ROG Swift 500 Hz Gaming Monitor, it manages high refresh rates with lower resolution. The 1920×1080 screen leverages a new take on TN (twisted nematic) panels called E-TN, with the “E” standing for esports. According to Asus, the E-TN panel offers “60 percent better response times than standard TN panels,” and in its own announcement, Nvidia claimed the E-TN panel brings “maximum motion and clarity.” But neither detailed how the technology differs from regular TN.

Standard TN panels have become less common among PC monitor releases as IPS (in-plane switching) and VA (vertical alignment) panels continue catching up in speed while offering stronger viewing angles (IPS) and higher contrast ratios (VA). Those opting for TN are willing to sacrifice some image quality in the name of speed or, often, lower prices. It's unclear how much sacrifice E-TN may require (besides a max resolution of 1080p) or how much of a premium it'll carry compared to today's standard TN monitors.

Next-gen Nvidia RTX 4000-series GPUs are reportedly coming in the next few months

Nvidia's "Lovelace" RTX 4000 GPUs will be faster than the top-end RTX 3090 Ti.

Enlarge / Nvidia’s “Lovelace” RTX 4000 GPUs will be faster than the top-end RTX 3090 Ti. (credit: Nvidia)

It has been nearly two years since Nvidia introduced its Ampere GPU architecture in the GeForce RTX 3080, and the company is reportedly gearing up to announce its replacement. Tom’s Hardware reports, based on tweets from a normally reliable leaker, that the RTX 4000-series and its Lovelace GPU architecture will begin rolling out early in Q3 of this year.

It has been so difficult to buy Nvidia’s RTX 3000-series GPUs for so long that it feels almost too soon to be talking about their replacements, though there was a similar two-year-ish gap between the first RTX 2000 GPUs and the RTX 3000 series. The difference is in how long it took Ampere to trickle all the way down to the bottom of the lineup. The Turing architecture debuted in September 2018 and had made its way down to the low-end GeForce GTX 1650 by April 2019; the first Ampere cards appeared in September 2020 but didn’t come to the GeForce RTX 3050 until January 2022.

Other reports from the same source suggest that the RTX 4000 GPU could be a big boost over the top-end RTX 3090 Ti, stepping up from 84 of Nvidia’s streaming multiprocessors (SMs) to somewhere between 126 and 140 SMs. The supposed RTX 4090 will come with 24GB of GDDR6 RAM (the same amount as the RTX 3090 and 3090 Ti) and is said to roughly double the RTX 3090’s performance within the same 450 W power envelope. Whether any of these performance claims are true remains to be seen—Nvidia’s GPUs do typically offer impressive performance bumps between generations, but double the performance in the same power envelope would be an anomalously large jump, historically speaking.
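
As a back-of-the-envelope comparison (our own arithmetic using the leaked figures above, not anything from Nvidia), the SM increase alone works out to well under 2x, which is part of why the doubled-performance claim stands out:

    # Leaked Lovelace SM counts vs. the RTX 3090 Ti's 84 SMs
    rtx_3090_ti_sms = 84
    for sms in (126, 140):
        print(f"{sms} SMs is {sms / rtx_3090_ti_sms:.2f}x the RTX 3090 Ti's count")
    # 126 SMs is 1.50x the RTX 3090 Ti's count
    # 140 SMs is 1.67x the RTX 3090 Ti's count

Clock speeds, cache, and architectural changes would have to account for the rest of any claimed 2x gain.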

Nvidia takes first step toward open source Linux GPU drivers

The RTX 3080 Ti. (credit: Sam Machkovech)

After years of hinting, Nvidia announced yesterday that it would be open-sourcing part of its Linux GPU driver, as both Intel and AMD have done for years now. Previously, Linux users who wanted to avoid Nvidia’s proprietary driver had to rely on reverse-engineered software like the Nouveau project, which worked best on older hardware and offered incomplete support at best for all of Nvidia’s GPU features.

“This release is a significant step toward improving the experience of using NVIDIA GPUs in Linux, for tighter integration with the OS, and for developers to debug, integrate, and contribute back,” says a blog post attributed to several Nvidia employees. “For Linux distribution providers, the open source modules increase ease of use. They also improve the out-of-the-box user experience to sign and distribute the NVIDIA GPU driver. Canonical and SUSE are able to immediately package the open kernel modules with Ubuntu and SUSE Linux Enterprise Distributions.”

Nvidia is specifically releasing an open source kernel driver under a dual MIT/GPL license and is not currently open-sourcing the parts of the driver that run in user space. Those components, including the drivers for OpenGL, Vulkan, OpenCL, and CUDA, remain closed source, as does the firmware for the GPU System Processor (GSP). Nvidia says these drivers “will remain closed source and published with pre-built binaries,” so it doesn't sound like there are immediate plans to release open source versions.

NiceHash defeats Nvidia’s GPU crypto-mining limits, does not appear to be a scam

(credit: BTC Keychain)

Nvidia began releasing LHR (or “Lite Hash Rate”) graphics cards last year to slow down their cryptocurrency mining performance and make them less appealing to non-gamers. Late last week, crypto-mining platform NiceHash announced that it had finally found a way around those limitations and released an update for its QuickMiner software that promises full Ethereum mining performance on nearly all of the LHR-enabled GeForce RTX 3000-series GPUs.

Unlike past attempts to disable the LHR protections, NiceHash’s workaround appears to be the real deal—Tom’s Hardware was able to confirm the performance boosts using QuickMiner and a GeForce RTX 3080 Ti.

For now, NiceHash says that the LHR workaround will only work in Windows, with “no Linux support yet.” The more flexible NiceHash Miner software doesn’t include the workarounds yet, though it will soon. NiceHash also says that the software won’t accelerate mining performance on newer GeForce cards that use version 3 of the LHR algorithm, a list that (for now) includes the RTX 3050 and the 12GB version of the RTX 3080 but which will presumably grow as Nvidia releases new GPUs and updated revisions for older GPUs.

SEC fines Nvidia $5.5M for misleading investors about GPU sales to crypto miners

(credit: Getty Images)

Nvidia has agreed to pay $5.5 million in fines to the United States Securities and Exchange Commission to settle charges that it failed to disclose how many of its GPUs were being sold for cryptocurrency mining, the agency announced today.

These charges are unrelated to the current (slowly ebbing) crypto-driven GPU shortage. Rather, they deal with a similar but smaller crypto-driven bump in GPU sales back in 2017.

The agency’s full order (PDF) goes into more detail. During its 2018 fiscal year, Nvidia reported increases in its GPU sales but did not disclose that those sales were being driven by cryptocurrency miners. The SEC alleges that Nvidia knew these sales were being driven by the relatively volatile cryptocurrency market and that Nvidia didn’t disclose that information to investors, misleading them about the company’s prospects for future growth.

Things aren’t “back to normal” yet, but GPU prices are steadily falling

The RTX 3080 Ti. (credit: Sam Machkovech)

Graphics card prices remain hugely inflated compared to a few years ago, but the good news is that things finally seem to be getting consistently better and not worse.

To quantify this, Jarred Walton at Tom’s Hardware and analyst Jon Peddie pulled together data on current and historical GPU pricing. The only card consistently tracking close to its manufacturer-suggested retail price of $199 is the harshly reviewed AMD Radeon RX 6500 XT, which is currently selling for $220, according to Peddie’s data, and $237 according to Walton’s. But across the board, prices are way down from their 2021 peaks.

Data from Graphic Speak's Jon Peddie, comparing the current and peak prices for a handful of current-generation GPUs. Note that the RTX 3050 and RX 6500 XT launched in early 2022; their prices were never as inflated as some of the higher-end models. (credit: Graphic Speak)

Pricing for Nvidia’s RTX 3080 demonstrates where the market sits right now—the card is currently selling for between $1,200 and $1,300 on average, and you can buy some models on retail sites like Newegg for as low as $1,000. The cost is still way up from the card’s MSRP of $699, but it’s down nearly a third from its peak price of $1,800.
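
Those numbers are easy to sanity-check; the quick arithmetic below uses the midpoint of the $1,200-$1,300 street-price range cited above (our own illustration, not Tom's Hardware data):

    # RTX 3080: MSRP, rough current street price (midpoint of the range above), and peak price
    msrp, current, peak = 699, 1250, 1800
    print(f"{(peak - current) / peak:.0%} below peak")  # 31% below peak
    print(f"{(current - msrp) / msrp:.0%} above MSRP")  # 79% above MSRP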

AMD announces FSR upscaling 2.0, promises big, hardware-agnostic gains

(credit: AMD)

When the PC industry's two biggest graphics card manufacturers aren't battling over benchmarks or chip shortage woes, they've been fighting over a different sales pitch: boosting performance for older GPUs. And while Nvidia has largely won that war, its win comes with an asterisk: its proprietary performance-boosting system, DLSS, requires relatively recent Nvidia hardware.

AMD's first major retaliatory blow came in the form of 2021's FidelityFX Super Resolution, but this open source, hardware-agnostic option has thus far proven inadequate. And AMD finally seems ready to admit as much with its rollout of FSR 2.0, which debuted in limited fashion on Wednesday ahead of a wider Game Developers Conference reveal next week and a formal rollout in video games starting “Q2 2022.”

It’s time for temporal solutions

Both FSR and DLSS function in modern games as pixel upscalers. In both cases, the game renders at a lower base resolution, and whichever system is active then processes and reconstructs the resulting imagery at a higher pixel count. This can include intelligent anti-aliasing (to reduce “stair-stepping” of diagonal lines), blurring, or even wholly redrawn pixels. Ultimately, the dream is that these systems can convert games running at 1440p or even 1080p into something nearly identical to a full 4K signal.
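
To put rough numbers on how much work an upscaler saves, here is a simple pixel-count comparison of those base resolutions against a 4K output (standard resolution math, not figures from AMD or Nvidia):

    # Pixels rendered at each base resolution vs. a 3840x2160 (4K) output target
    target = 3840 * 2160  # 8,294,400 pixels
    for name, w, h in (("1080p", 1920, 1080), ("1440p", 2560, 1440)):
        pixels = w * h
        print(f"{name}: {pixels:,} pixels rendered, {target / pixels:.2f}x fewer than 4K")
    # 1080p: 2,073,600 pixels rendered, 4.00x fewer than 4K
    # 1440p: 3,686,400 pixels rendered, 2.25x fewer than 4K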

Nvidia wants to speed up data transfer by connecting data center GPUs to SSDs 

(credit: Getty Images)

Microsoft brought DirectStorage to Windows PCs this week. The API promises faster load times and more detailed graphics by letting game developers make apps that load graphical data from the SSD directly to the GPU. Now, Nvidia and IBM have created a similar SSD/GPU technology, but they are aiming it at the massive data sets in data centers.

Instead of targeting console or PC gaming like DirectStorage, Big accelerator Memory (BaM) is meant to provide data centers quick access to vast amounts of data in GPU-intensive applications, like machine-learning training, analytics, and high-performance computing, according to a research paper spotted by The Register this week. Entitled “BaM: A Case for Enabling Fine-grain High Throughput GPU-Orchestrated Access to Storage” (PDF), the paper by researchers at Nvidia, IBM, and a few US universities proposes a more efficient way to run next-generation applications in data centers with massive computing power and memory bandwidth.

BaM also differs from DirectStorage in that the creators of the system architecture plan to make it open source.

Cybercriminals who breached Nvidia issue one of the most unusual demands ever

Close-up photograph of a high-end computer component. (credit: Getty Images)

Data extortionists who stole up to 1 terabyte of data from Nvidia have delivered one of the most unusual ultimatums ever in the annals of cybercrime: allow Nvidia’s graphics cards to mine cryptocurrencies faster or face the imminent release of the company’s crown-jewel source code.

A ransomware group calling itself Lapsus$ first claimed last week that it had hacked into Nvidia’s corporate network and stolen more than 1TB of data. Included in the theft, the group claims, are schematics and source code for drivers and firmware. A relative newcomer to the ransomware scene, Lapsus$ has already published one tranche of leaked files, which among other things included the usernames and cryptographic hashes for 71,335 of the chipmaker’s employees.

The group then went on to make the highly unusual demand: remove a feature known as LHR, short for “Lite Hash Rate,” or see the further leaking of stolen data.

Weeks after announcing it, Nvidia has gone silent on its flagship RTX 3090 Ti

Nvidia showed off the RTX 3090 Ti at CES 2022 in January.

At CES back in January, Nvidia announced two new desktop graphics cards. One of them, the lower-midrange RTX 3050, got pricing and a release date, and the reviews have already come and gone. The other, the tippy-top-end RTX 3090 Ti, had some of its specs announced, but the company said it would have more specifics “by the end of the month.”

But we’re now halfway into February, and the company still doesn’t have any news to share. An Nvidia spokesperson told The Verge that the company does not “currently have more info to share” on the speedy-but-almost-certainly-pricey flagship GPU.

This follows reports from mid-January that the company and its partners had halted production on the 3090 Ti due to alleged issues with the GPU’s BIOS and the hardware itself. Whether fixes can be applied to GPUs that have already been manufactured is unclear, but if the GPU die itself needs to be revised in some way, limited manufacturing capacity amid the ongoing global chip shortage could cause substantial delays.

Intel’s strategy for outflanking Arm takes shape with bet on RISC-V

(credit: Tony Avelar/Bloomberg)

Many of Intel’s current woes can be traced to the fact that the company was left out of the iPhone. Whether Intel passed on the opportunity or couldn’t meet the spec is by now a moot point, but missing out on the smartphone revolution—and its billions of chips—played no small part in the company falling behind the leading edge.

Now, Intel is ponying up $1 billion in an attempt to avoid repeating history.

The company announced an “innovation fund” this week that places bets on a couple of key technologies, chief among them RISC-V, a free, open source instruction set that shows promise in low-power and embedded systems, markets that are expected to grow significantly over the next several years.

Nvidia abandons $66 billion Arm purchase

(credit: Arm)

SoftBank’s $66 billion sale of UK-based chip business Arm to Nvidia collapsed on Monday after regulators in the US, UK, and EU raised serious concerns about its effects on competition in the global semiconductor industry, according to three people with direct knowledge of the transaction.

The deal, the largest ever in the chip sector, would have given California-based Nvidia control of a company that makes technology at the heart of most of the world’s mobile devices. A handful of Big Tech companies that rely on Arm’s chip designs, including Qualcomm and Microsoft, had objected to the purchase.

SoftBank will receive a break-up fee of up to $1.25 billion and is seeking to unload Arm through an initial public offering before the end of the year, said one of the people.

RGB keyboard feature renews hope for RTX Chromebooks

It has been two years since Google sparked dreams of PC gaming coming to Chromebooks. We’ve yet to hear word on when we’ll be able to frag on Chrome OS, but we now know that work is being done to bring RGB-backlit keyboards to the operating system. And since RGB and gaming go hand in hand, these keyboards could find their way into potential Chromebooks with Nvidia RTX graphics cards.

In April, Nvidia announced that it is working with MediaTek, which makes the SoCs in many Chromebooks, to create a reference platform that supports Chromium and Nvidia SDKs, as well as Linux. In a press release, the GPU maker promised to bring together RTX graphics cards and Arm-based chips to deliver ray tracing “to a new class of laptops.” In 2021, Nvidia demoed RTX on a MediaTek Kompanio 1200, a chip that MediaTek says will be in “some of the biggest Chromebook brands.”

The news came more than a year after Google announced that it was working on bringing Steam to Chromebooks. It doesn’t matter if the laptops have RTX graphics if there are no PC games worth playing on them. There hasn’t been much news on RTX or Steam support since. But at least we know that work is underway on another part of making gaming on Chromebooks a thing: RGB.

Nvidia ready to abandon Arm acquisition, report says

(credit: Pavlo Gonchar/SOPA Images/LightRocket)

Nvidia may be walking away from its acquisition of Arm Ltd., the British chip designer, according to a report from Bloomberg.

The blockbuster deal faced global scrutiny, and Nvidia apparently feels that it hasn’t made sufficient progress in convincing regulators that the acquisition won’t harm competition or national security. “Nvidia has told partners that it doesn’t expect the transaction to close, according to one person who asked not to be identified because the discussions are private,” Bloomberg reported.

In a further sign that the deal is likely to be abandoned, SoftBank is also working to take Arm public, according to the report.

Nvidia expands the RTX 3000 series with new high- and low-end GPUs

Nvidia's "next BFGPU."

Enlarge / Nvidia’s “next BFGPU.”

Nvidia used its CES “special address” today to tease the company’s top-of-the-line RTX 3090 Ti GPU alongside other GPUs and a completely new class of “dual format” gaming monitor aimed at esports pros.

The 3090 Ti, which Geforce Senior VP Jeff Fisher referred to as the company's “next BFGPU,” will include a hefty 24GB of G6X memory running at up to 21Gbps, which Nvidia called the “fastest ever” in its GPUs. That will help the card push out an impressive 40 Shader-Teraflops, 78 RT-Teraflops, and 320 Tensor-Teraflops, Fisher said. Pricing and release date info weren't discussed, but more details will be available “later this month,” he added.

Elsewhere in the RTX line, Nvidia announced the RTX 3050, a $249 GPU available starting January 27. Sold in the presentation as an upgrade to the aging GTX 1050 budget workhorse, the 3050 sports 2nd-generation RT cores and 3rd-generation tensor cores using Nvidia’s Ampere architecture. That will let it run AAA games like Doom Eternal and Guardians of the Galaxy at 60 fps or higher with DLSS on, even with ray-tracing enabled, Fisher said. The 3050 will be capable of 9 Shader-Teraflops and 18 RT-Teraflops and come with 8GB of G6 memory.

Nvidia’s GeForce Now brings 1600p game streaming to M1 MacBooks

2020 MacBook Air. (credit: Apple)

Nvidia’s GeForce Now, like Google Stadia, lets you stream PC games to your computer, even if it’s not powerful enough to run the titles natively. Nvidia’s data centers stream content from the cloud, which means you can do high-level gaming on machines like thin-and-light Windows laptops or even MacBooks. And at a time when finding a modern GPU feels like searching for a unicorn, the idea of game streaming is starting to make even more sense.

There are some caveats, of course. There’s bound to be some lag, whether it comes from latency, Internet connectivity issues, or bandwidth overages. And while gaming services like GeForce Now aim to make it easy to run PC games on a Mac, users of the MacBook Air and MacBook Pro have had to make another sacrifice: resolution.

A different resolution

The MacBook Air has a native resolution of 2560×1600, as does the 13-inch MacBook Pro. If that seems like an odd resolution, that’s because the laptops use the 16:10 aspect ratio. 16:9 is still the most common aspect ratio among laptops, making 2560×1440 more common as well. GeForce Now subscribers with a MacBook Air or 13-inch MacBook Pro have had to resort to using that resolution, which is technically less sharp than the Macs’ native resolution (4,096,000 versus 3,686,400 pixels).
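
The pixel counts in that parenthetical are simple multiplication and easy to verify (our own arithmetic):

    # Native 16:10 MacBook panel vs. the 16:9 resolution GeForce Now previously fell back to
    native = 2560 * 1600    # 4,096,000 pixels
    fallback = 2560 * 1440  # 3,686,400 pixels
    print(f"The 1440p fallback renders {1 - fallback / native:.0%} fewer pixels than native")
    # The 1440p fallback renders 10% fewer pixels than native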

FTC sues Nvidia to preserve Arm’s status as “Switzerland” of semiconductors

(credit: Arm)

The Federal Trade Commission has sued to block Nvidia’s acquisition of Arm, the semiconductor design firm, saying that the blockbuster deal would unfairly stifle competition.

“The FTC is suing to block the largest semiconductor chip merger in history to prevent a chip conglomerate from stifling the innovation pipeline for next-generation technologies,” Holly Vedova, director of the FTC’s competition bureau, said in a statement. “Tomorrow’s technologies depend on preserving today’s competitive, cutting-edge chip markets. This proposed deal would distort Arm’s incentives in chip markets and allow the combined firm to unfairly undermine Nvidia’s rivals.”

Nvidia first announced its intention to acquire Arm in September 2020. At the time, the deal was worth $40 billion, but since then, Arm’s stock price has soared, and the cost of the cash and stock transaction has risen to $75 billion. The FTC lawsuit threatens to scuttle the deal entirely.

Nvidia acquisition of Arm now under scrutiny by FTC

(credit: Getty Images)

The US has raised potential objections to Nvidia’s controversial acquisition of the UK chip design company Arm from SoftBank, adding a fresh hurdle to a deal that has already stirred up serious opposition on the other side of the Atlantic.

News that American regulators shared European concerns came a day after the UK launched an in-depth investigation into the transaction on competition and national security grounds. The European Commission began its own extended review late last month.

Despite the mounting signs that regulators may try to block the deal, Nvidia said on Wednesday that it still believed “in the merits and benefits of the acquisition to Arm, its licensees and the industry.”

UK announces national security probe of Nvidia’s $54 billion Arm deal

(credit: VGG | Getty Images)

The British government has launched an in-depth investigation into Nvidia’s takeover of the UK-based technology company Arm on national security grounds, throwing another hurdle in the path of the $54 billion deal.

Digital and culture secretary Nadine Dorries has ordered a phase 2 investigation into the transaction on public interest grounds, meaning it will now be subject to a full-blown probe into antitrust and security issues. The UK competition watchdog uncovered “serious competition concerns” with the deal in July.

In a letter to the parties published on Tuesday, the government said: “The secretary of state believes that the ubiquity of Arm technology makes the accessibility and reliability of Arm IP necessary for national security.”

Asus takes it back to 2019 with new GTX 1650 OLED laptop

(credit: Asus)

Asus’ latest machines aimed at creators offer the latest and not-so-latest Nvidia mobile graphics. Armed with newer tech, like an OLED panel with up to a 90 Hz refresh rate and 11th-gen Intel and AMD Ryzen 5000-series mobile CPUs, the Asus Vivobook Pro 14 and 15 OLED laptops announced today also have some interesting choices for graphics: either the current-gen RTX 3050 or the GTX 1650, a card that first debuted two generations ago.

The mobile GTX 1650 originally came out in 2019 with GDDR5 memory. But in 2020, as current-generation cards were virtually impossible to find at anywhere near MSRP, Nvidia released a new variant with GDDR6 memory, boosting memory bandwidth from 128 to 192 GB/s. At the time, Nvidia told PC Gamer that “the industry is running out of GDDR5.”
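
That bandwidth increase comes from faster memory on the same-width bus; as a rough sketch of the math, assuming the GTX 1650's 128-bit memory bus and the commonly cited 8 Gbps (GDDR5) and 12 Gbps (GDDR6) per-pin data rates, which are our assumptions rather than figures from the article:

    # Memory bandwidth in GB/s = (bus width in bits / 8 bits per byte) * per-pin rate in Gbps
    bus_width_bits = 128
    for kind, gbps_per_pin in (("GDDR5", 8), ("GDDR6", 12)):
        print(f"{kind}: {bus_width_bits / 8 * gbps_per_pin:.0f} GB/s")
    # GDDR5: 128 GB/s
    # GDDR6: 192 GB/s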

Asus' new Vivobooks employ the Max-Q version of the GTX 1650, allowing the 14-inch laptop to measure just 0.76 inches (19.25 mm) thick and the 15-inch version 0.74 inches (18.9 mm). The trim GPU carries 4GB of GDDR6 memory and is said to be specced to hit a clock speed of up to 1,245 MHz with a total graphics power (TGP) of 35W. For comparison, the RTX 3050 is only available in the 15-inch version of the Vivobook Pro OLED, with 4GB of GDDR6 and up to 1,500 MHz at 35W (50W with Dynamic Boost).

God of War’s 2018 reboot arrives on PC in January 2022

Sony’s bullish-if-slow attitude toward launching its biggest PlayStation exclusives on Windows PCs will continue in January 2022 with a release that pretty much everybody saw coming. The critically acclaimed 2018 reboot of God of War will be coming to PC.

What fans probably didn’t expect, however, was for Sony to ally with Nvidia for the release.

The PC version of God of War, arriving on January 14, 2022, will retail for $49.99, and its Steam listing already includes some technical details as of Wednesday morning. The most interesting addition is entirely new for Sony Interactive Entertainment launches on PC: support for Nvidia’s Deep Learning Super Sampling (DLSS) standard.

RX 6600 GPU review: Not likely to jolt AMD’s paltry Steam survey numbers

AMD’s latest lower-end graphics card, the RX 6600, is its sixth RDNA 2 offering in the past 12 months—a fact that might lead you to believe the company is making a killing in the world of PC GPUs these days. But the little public-facing data we have doesn’t bear that out.

Both AMD and Nvidia are in similar chip-shortage boats—all leaky and going down the same hellish supply chain creek without a paddle. Yet, Steam hardware surveys have told a tale of Nvidia enjoying a noticeable installation lead with its current-day RTX 3000 series of GPUs (5.76 percent of all registered GPUs on Steam in September 2021, excluding laptop variants) over AMD’s RDNA 2 (0.16 percent in the form of a single GPU, and that’s not a decimal-point typo). You might assume this would compel AMD to try something drastic with its latest GPU.

That’s not the case this month. AMD’s RX 6600, which goes on sale at some point today, is nowhere near the drastic card that AMD arguably needs right now. It’s loudly positioned as a “1080p” resolution card… just like its older sibling, the RX 6600XT, which came and went in August. In fact, both cards involve AMD’s Navi 23 die, with the 6600 either copying or slashing specs while also dropping in MSRP from $379 to $329.

These virtual obstacle courses help real robots learn to walk

A clip from the simulation where virtual robots learn to climb steps.

An army of more than 4,000 marching doglike robots is a vaguely menacing sight, even in a simulation. But it may point the way for machines to learn new tricks.

The virtual robot army was developed by researchers from ETH Zurich in Switzerland and chipmaker Nvidia. They used the wandering bots to train an algorithm that was then used to control the legs of a real-world robot.

In the simulation, the machines—called ANYmals—confront challenges like slopes, steps, and steep drops in a virtual landscape. Each time a robot learned to navigate a challenge, the researchers presented a harder one, nudging the control algorithm to be more sophisticated.

3 methodologies for automated video game highlight detection and capture

With the rise of livestreaming, gaming has evolved from a toy-like consumer product to a legitimate platform and medium in its own right for entertainment and competition.

Twitch’s viewer base alone has grown from 250,000 average concurrent viewers to over 3 million since its acquisition by Amazon in 2014. Competitors like Facebook Gaming and YouTube Live are following similar trajectories.

The boom in viewership has fueled an ecosystem of supporting products as today’s professional streamers push technology to its limit to increase the production value of their content and automate repetitive aspects of the video production cycle.

The online streaming game is a grind, with full-time creators putting in eight- if not 12-hour performances on a daily basis. In a bid to capture valuable viewer attention, 24-hour marathon streams are not uncommon either.

However, these hours in front of the camera and keyboard are only half of the streaming grind. Maintaining a constant presence on social media and YouTube fuels the growth of the stream channel and attracts more viewers to catch a stream live, where they may purchase monthly subscriptions, donate and watch ads.

Distilling the most impactful five to 10 minutes of content out of eight or more hours of raw video becomes a non-trivial time commitment. At the top of the food chain, the largest streamers can hire teams of video editors and social media managers to tackle this part of the job, but growing and part-time streamers struggle to find the time to do this themselves or come up with the money to outsource it. There aren’t enough minutes in the day to carefully review all the footage on top of other life and work priorities.

Computer vision analysis of game UI

An emerging solution is to use automated tools to identify key moments in a longer broadcast. Several startups are competing to dominate this niche, and their differing approaches to the problem are what set the competing solutions apart. Many of these approaches follow a classic computer science hardware-versus-software dichotomy.

Athenascope was one of the first companies to execute on this concept at scale. Backed by $2.5 million of venture capital funding and an impressive team of Silicon Valley Big Tech alumni, Athenascope developed a computer vision system to identify highlight clips within longer recordings.

In principle, it’s not so different from how self-driving cars operate, but instead of using cameras to read nearby road signs and traffic lights, the tool captures the gamer’s screen and recognizes indicators in the game’s user interface that communicate important events happening in-game: kills and deaths, goals and saves, wins and losses.

These are the same visual cues that traditionally inform the game’s player what is happening in the game. In modern game UIs, this information is high-contrast, clear and unobscured, and typically located in predictable, fixed locations on the screen at all times. This predictability and clarity lends itself extremely well to computer vision techniques such as optical character recognition (OCR) — reading text from an image.
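
As a rough illustration of the general approach (a minimal sketch, not Athenascope's actual pipeline), reading a fixed HUD region with off-the-shelf OCR might look like this; the file name and crop coordinates are hypothetical and would differ for every game and resolution:

    # Minimal sketch: OCR a fixed HUD region from one captured frame (hypothetical coordinates)
    from PIL import Image
    import pytesseract

    frame = Image.open("frame_0001.png")                  # one frame grabbed from the stream
    scoreboard = frame.crop((1700, 40, 1880, 90))         # fixed on-screen UI location (example values)
    text = pytesseract.image_to_string(scoreboard, config="--psm 7")  # treat the crop as a single text line
    print(text.strip())                                   # e.g., a kill counter or score readout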

The stakes here are lower than self-driving cars, too, since a false positive from this system produces nothing more than a less-exciting-than-average video clip — not a car crash.

NVIDIA’s latest tech makes AI voices more expressive and realistic

The voices on Amazon’s Alexa, Google Assistant and other AI assistants are far ahead of old-school GPS devices, but they still lack the rhythms, intonation and other qualities that make speech sound, well, human. NVIDIA has unveiled new research and tools that can capture those natural speech qualities by letting you train the AI system with your own voice, the company announced at the Interspeech 2021 conference.

To improve its AI voice synthesis, NVIDIA’s text-to-speech research team developed a model called RAD-TTS, a winning entry at an NAB broadcast convention competition to develop the most realistic avatar. The system allows an individual to train a text-to-speech model with their own voice, including the pacing, tonality, timbre and more.

Another RAD-TTS feature is voice conversion, which lets a user deliver one speaker’s words using another person’s voice. That interface gives fine, frame-level control over a synthesized voice’s pitch, duration and energy.

Using this technology, NVIDIA’s researchers created more conversational-sounding voice narration for its own I Am AI video series using synthesized rather than human voices. The aim was to get the narration to match the tone and style of the videos, something that hasn’t been done well in many AI narrated videos to date. The results are still a bit robotic, but better than any AI narration I’ve ever heard.

“With this interface, our video producer could record himself reading the video script, and then use the AI model to convert his speech into the female narrator’s voice. Using this baseline narration, the producer could then direct the AI like a voice actor — tweaking the synthesized speech to emphasize specific words, and modifying the pacing of the narration to better express the video’s tone,” NVIDIA wrote.

NVIDIA is distributing some of this research — optimized to run efficiently on NVIDIA GPUs, of course — to anyone who wants to try it via open source through the NVIDIA NeMo Python toolkit for GPU-accelerated conversational AI, available on the company’s NGC hub of containers and other software.

“Several of the models are trained with tens of thousands of hours of audio data on NVIDIA DGX systems. Developers can fine tune any model for their use cases, speeding up training using mixed-precision computing on NVIDIA Tensor Core GPUs,” the company wrote.

Editor’s note: This post originally appeared on Engadget.

EU set to launch formal probe into Nvidia’s $54 billion takeover of Arm

(credit: Arm)

Brussels is set to launch a formal competition probe early next month into Nvidia’s planned $54 billion takeover of British chip designer Arm, after months of informal discussions between regulators and the US chip company.

The investigation is likely to begin after Nvidia officially notifies the European Commission of its plan to acquire Arm, with the US chipmaker planning to make its submission in the week starting September 6, according to two people with direct knowledge of the process. They added that the date might yet change, however.

Brussels’ investigation would come after the UK’s Competition and Markets Authority said its initial assessment of the deal suggested there were “serious competition concerns” and that a set of remedies suggested by Nvidia would not be sufficient to address them.

Nvidia-ARM takeover raises serious antitrust concerns, finds UK’s CMA

The UK's competition watchdog has raised serious concerns about Nvidia's proposed takeover of chip designer ARM.

Its assessment was published today by the government, which will now need to decide whether to ask the Competition and Markets Authority (CMA) to carry out an in-depth probe into the proposed acquisition.

In the executive summary of the CMA's report for the government, the watchdog sets out concerns that, if the deal were to go ahead, the merged business would have the ability and incentive to harm the competitiveness of Nvidia's rivals by restricting access to ARM's IP, which is used by companies that produce semiconductor chips and related products in competition with Nvidia.

The CMA is worried that the loss of competition could stifle innovation across a number of markets — including data centres, gaming, the ‘internet of things’, and self-driving cars, with the resulting risk of more expensive or lower quality products for businesses and consumers.

The CMA rejected a behavioral remedy offered by Nvidia and has recommended moving to an in-depth ‘Phase 2’ investigation of the proposed merger on competition grounds.

Commenting in a statement, CMA chief executive Andrea Coscelli said: “We’re concerned that Nvidia controlling Arm could create real problems for NVIDIA’s rivals by limiting their access to key technologies, and ultimately stifling innovation across a number of important and growing markets. This could end up with consumers missing out on new products, or prices going up.

“The chip technology industry is worth billions and is vital to products that businesses and consumers rely on every day. This includes the critical data processing and datacentre technology that supports digital businesses across the economy, and the future development of artificial intelligence technologies that will be important to growth industries like robotics and self-driving cars.”

Nvidia has been contacted for comment.

In a statement on its website, the Department for Digital, Culture, Media and Sport said the UK's digital secretary is now “considering the relevant information contained in the full report” and will make a decision on whether to ask the CMA to conduct a ‘Phase Two’ investigation “in due course”.

“There is no set period in which this decision must be made, but it must take into account the need to make a decision as soon as reasonably practicable to reduce uncertainty,” it added. 

The proposed merger has faced considerable domestic opposition, with opponents including one of ARM's co-founders calling for it to be blocked.

AMD RX 6600XT review: A sad trombone noise of a “$379” 2021 GPU

In a traditional PC hardware cycle, AMD’s new RX 6600XT could have been a welcome stopgap for a budget audience. Over the years, we’ve regularly seen this kind of GPU from both major GPU manufacturers. Those companies regularly turn down some specs, repurpose sub-optimal chips, and get a moderately priced option to follow their biggest kahunas for anybody tiptoeing into solid 1080p or 1440p gaming options on PC.

Unfortunately, there’s nothing traditional about the latest traditional PC hardware cycle. Today’s supply-and-demand ecosystem of computer GPUs looks like something out of a terrifying Dario Argento film. The horrors that lurk in every shadow include chip shortages and bot-fueled scalper waves.

And that context really helps us frame the $379 RX 6600XT—an underpowered, overpriced, and downright disappointing GPU whose primary sales pitch is 1080p gaming. That category is famously CPU-limited, not GPU-limited, so this GPU’s mileage will truly vary based on your rig. In general, Nvidia’s RTX 3060 Ti (at only $20 more MSRP) wins handily, while Nvidia’s RTX 2060 Super (which launched for $399 in July 2019) is within shouting distance of this brand-new card. That latter yardstick in particular makes AMD’s newest product a hard GPU to recommend.

Embodied AI, superintelligence and the master algorithm

Superintelligence, roughly defined as an AI algorithm that can solve all problems better than people, will be a watershed for humanity and tech.

Even the best human experts have trouble making predictions about highly probabilistic, wicked problems. And yet those wicked problems surround us. We are all living through immense change in complex systems that impact the climate, public health, geopolitics and basic needs served by the supply chain.

Just determining the best way to distribute COVID-19 vaccines without the help of an algorithm is practically impossible. We need to get smarter in how we solve these problems — fast.

Superintelligence, if achieved, would help us make better predictions about challenges like natural disasters, building resilient supply chains or geopolitical conflict, and come up with better strategies to solve them. The last decade has shown how much AI can improve the accuracy of our predictions. That’s why there is an international race among corporations and governments around superintelligence.

In the next year and a half, we’re going to see increasing adoption of technologies that will trigger a broader industry shift, much as Tesla triggered the transition to EVs.

Highly credible think tanks like Deepmind and OpenAI say that the path to superintelligence is visible. Last month, Deepmind said reinforcement learning (RL) could get us there, and RL is at the heart of embodied AI.

What is embodied AI?

Embodied AI is AI that controls a physical “thing,” like a robot arm or an autonomous vehicle. It is able to move through the world and affect a physical environment with its actions, similar to the way a person does. In contrast, most predictive models live in the cloud doing things such as classifying text or images, steering flows of bits without ever moving a body through three-dimensional space.

For those who work in software, including AI researchers, it is too easy to forget the body. But any superintelligent algorithm needs to control a body because so many of the problems we confront as humans are physical. Firestorms, coronaviruses and supply chain breakdowns need solutions that aren’t just digital.

All the crazy Boston Dynamics videos of robots jumping, dancing, balancing and running are examples of embodied AI. They show how far we’ve come from early breakthroughs in dynamic robot balancing made by Trevor Blackwell and Anybots more than a decade ago. The field is moving fast and, in this revolution, you can dance.

What’s blocked embodied AI up until now?

Challenge 1: One of the challenges when controlling machines with AI is the high dimensionality of the world — the sheer range of things that can come at you.

DNSFilter secures $30M Series A to step up fight against DNS-based threats

DNSFilter, an artificial intelligence startup that provides DNS protection to enterprises, has secured $30 million in Series A funding from Insight Partners.

DNSFilter, as its name suggests, offers DNS-based web content filtering and threat protection. Unlike the majority of its competitors, which include the likes of Palo Alto Networks and Webroot, the startup uses proprietary AI technology to continuously scan billions of domains daily, identifying anomalies and potential vectors for malware, ransomware, phishing, and fraud.

“Most of our competitors either rent or lease a database from some third party,” Ken Carnesi, co-founder and CEO of DNSFilter tells TechCrunch. “We do that in-house, and it’s through artificial intelligence that’s scanning these pages in real-time.” 

The company, which counts the likes of Lenovo, Newegg, and Nvidia among its 14,000 customers, claims this industry-first technology catches threats an average of five days before competitors and is capable of identifying 76% of domain-based threats. By the end of 2021, DNSFilter says it will block more than 1.1 million threats daily.

DNSFilter has seen rapid growth over the past 12 months as a result of the mass shift to remote working and the increase in cyber threats and ransomware attacks that followed. The startup saw eightfold growth in customer activity, doubled its global headcount to just over 50 employees, and partnered with Canadian software house N-Able to push into the lucrative channel market.  

“DNSFilter’s rapid growth and efficient customer acquisition are a testament to the benefits and ease of use compared to incumbents,” said Thomas Krane, principal at Insight Partners, who has been appointed as a director on DNSFilter’s board. “The traditional model of top-down, hardware-centric network security is disappearing in favor of solutions that readily plug in at the device level and can cater to highly distributed workforces.”

Prior to this latest funding round, which was also backed by Arthur Ventures (the lead investor in DNSFilter's seed round), CrowdStrike co-founder and former chief technology officer Dmitri Alperovitch also joined DNSFilter's board of directors.

Carnesi said the addition of Alperovitch to the board will help the company get its technology into the hands of enterprise customers. “He’s helping us to shape the product to be a good fit for enterprise organizations, which is something that we’re doing as part of this round — shifting focus to be primarily mid-market and enterprise,” he said.

The company also recently added former CrowdStrike vice president Jen Ayers as its chief operating officer. “She used to manage their entire managed threat hunting team, so she’s definitely coming on for the security side of things as we build out our domain intelligence team further,” Carnesi said.

With its newly-raised funds, DNSFilter will further expand its headcount, with plans to add more than 80 new employees globally over the next 12 months.

“There’s a lot more that we can do for security via DNS, and we haven’t really started on that yet,” Carnesi said. “We plan to do things that people won’t believe were possible via DNS.”

The company, which acquired Web Shrinker in 2018, also expects there to be more acquisitions on the cards going forward. “There are some potential companies that we’d be looking to acquire to speed up our advancement in certain areas,” Carnesi said.

Texan city to deploy intelligent traffic system from Velodyne Lidar

Velodyne lidar sensors will create a real-time 3D map of this dangerous intersection in Austin to help the city make its roads safer. (credit: Velodyne)

American roads have never been especially safe compared to those in other countries. But the pandemic made things worse, with shocking rises in crashes and deaths of both drivers and pedestrians in 2020 despite a decrease in the number of miles Americans traveled. Reducing this catastrophic casualty rate is a goal of the autonomous vehicle industry, which often cites a (probably misleading) statistic claiming that 94 percent of all fatal crashes are due to human error.

But some of the technology companies driving the AV revolution are also interested in improving traffic infrastructure. On Wednesday, Velodyne Lidar announced that it will deploy a lidar-based traffic-monitoring system to a dangerous intersection in Austin, Texas, as part of its Intelligent Infrastructure Solution.

The system will create real-time 3D maps of roads and intersections, replacing the current combination of inductive loop detectors, cameras, and radar. Velodyne has joined Nvidia’s Metropolis program and will use Nvidia Jetson AGX Xavier edge processors to interpret the lidar data.

Volvo Cars sets the tone for its next-gen vehicles with ‘Concept Recharge’ EV

Volvo Cars wants to completely electrify its lineup by 2030 and on Wednesday offered a glimpse into how it plans to get there and what its next generation of vehicles might look like.

But it’s not going to do it alone. Although the automaker plans on developing its own in-car operating system and other parts of the car, Volvo Cars detailed how it plans to work with partners like Northvolt, Google and Luminar to build out its future vehicles lineup. It also unveiled the first images of “Concept Recharge,” a concept EV that has flat floors, two interior screens and rear “suicide doors” that open from the middle of the vehicle.

Volvo Concept Recharge. Image Credits: Volvo Cars

The Concept Recharge is also outfitted with Luminar sensors, in line with an announcement earlier this month that Volvo Cars’ forthcoming flagship electric SUV will be equipped with Luminar’s technology stack as standard.

On the battery front, Volvo Cars is working with Swedish battery developer Northvolt on a pack that it says will enable a range of up to around 621 miles — a massive achievement of energy density, should Northvolt pull it off. The two companies are aiming to build a gigafactory in Europe by 2026 in a new 50-50 joint venture, with a potential annual capacity of up to 50 gigawatt hours. Volvo Cars will also source 15 GWh of batteries from Northvolt’s battery plant in Skellefteå, Sweden from 2024.

Future Volvo Cars vehicles will be capable of bidirectional charging, a capability that can turn the EV into a mobile generator or a mini power plant, offloading excess energy to the electricity grid.

Volvo said its OS, VolvoCars.OS, will act as an “umbrella system” for underlying operating systems, including its infotainment system led by Google and tech from Linux, QNX and AUTOSAR. While the vehicle will contain up to 100 electrical control units, these will run on a core computing system made up of three main computers being developed in partnership with Nvidia.

The automaker also discussed in more detail its plans to equip its flagship electric SUV with Luminar's sensor suite and technology from Volvo's software arm Zenseact. Executives sidestepped questions asking them to specify the level of the autonomous system (referring to the scale developed by the Society of Automotive Engineers to measure the level of autonomy in a driving system), saying that they preferred to discuss the forthcoming AV driving system in terms of supervised or unsupervised. Under those terms, Volvo said the two modes, Cruise and Ride, would require driver supervision and no supervision, respectively. It said it would gradually launch unsupervised functionality at some point in the future.

The forthcoming system will generate tons of driving data from customers, and Volvo doesn't intend to let it go to waste. The automaker said it aims to build a data factory to process information it collects from customers who use its autonomous drive safety features (with their consent). It would use this data to make improvements to the system, which it would push to vehicles via over-the-air updates.

“We need to transform this company from just a premium conventional company. We need to transform it into a leader in the new premium electric segment, which is growing very fast,” Volvo CEO Håkan Samuelsson said. “We need to understand batteries in the same way we understand the combustion engine.”

Nvidia’s Canvas AI painting tool instantly turns blobs into realistic landscapes

AI has been filling in the gaps for illustrators and photographers for years now — literally, it intelligently fills gaps with visual content. But the latest tools are aimed at letting an AI give artists a hand from the earliest, blank-canvas stages of a piece. Nvidia’s new Canvas tool lets the creator rough in a landscape like paint-by-numbers blobs, then fills it in with convincingly photorealistic (if not quite gallery-ready) content.

Each distinct color represents a different type of feature: mountains, water, grass, ruins, etc. When colors are blobbed onto the canvas, the crude sketch is passed to a generative adversarial network. GANs essentially pass content back and forth between a creator AI that tries to make (in this case) a realistic image and a detector AI that evaluates how realistic that image is. These work together to make what they think is a fairly realistic depiction of what’s been suggested.
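
For readers curious what that creator-versus-detector loop looks like in code, here is a minimal, generic GAN training sketch in PyTorch (a toy illustration of the technique only; Nvidia's Canvas/GauGAN model is a far larger conditional GAN trained on landscape photos):

    # Toy GAN: a generator learns to produce 2-D samples the discriminator can't tell from "real" ones
    import torch
    from torch import nn

    generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 2) * 0.5 + 2.0      # "real" samples drawn from a fixed distribution
        fake = generator(torch.randn(64, 16))      # the "creator" maps random noise to candidate samples

        # The "detector" learns to score real samples near 1 and generated samples near 0
        d_opt.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()

        # The "creator" learns to produce samples the detector scores as real
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()

Canvas adds a conditioning input on top of this basic loop: the semantic "blob" map tells the generator which kind of feature (mountain, water, grass) to synthesize in each region.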

It’s pretty much a more user-friendly version of the prototype GauGAN (get it?) shown at CVPR in 2019. This one is much smoother around the edges, produces better imagery, and can run on any Windows computer with a decent Nvidia graphics card.

This method has been used to create very realistic faces, animals and landscapes, though there’s usually some kind of “tell” that a human can spot. But the Canvas app isn’t trying to make something indistinguishable from reality — as concept artist Jama Jurabaev explains in the video below, it’s more about being able to experiment freely with imagery more detailed than a doodle.

For instance, if you want to have a moldering ruin in a field with a river off to one side, a quick pencil sketch can only tell you so much about what the final piece might look like. What if you have it one way in your head, and then two hours of painting and coloring later you realize that because the sun is setting on the left side of the painting, it makes the shadows awkward in the foreground?

If instead you just scribbled these features into Canvas, you might see that this was the case right away, and move on to the next idea. There are even ways to quickly change the time of day, palette, and other high-level parameters so they can quickly be evaluated as options.

Animation of an artist sketching while an AI interprets his strokes as photorealistic features.

Image Credits: Nvidia

“I’m not afraid of blank canvas any more,” said Jurabaev. “I’m not afraid to make very big changes, because I know there’s always AI helping me out with details… I can put all my effort into the creative side of things, and I’ll let Canvas handle the rest.”

It’s very much like Google’s Chimera Painter, if you remember that particular nightmare fuel, in which an almost identical process was used to create fantastic animals. Instead of snow, rock and bushes, it had hind legs, fur, teeth and so on, which made it rather more complicated to use and easy to go wrong with.

Image Credits: Devin Coldewey / Google

Still, it may be better than the alternative, for certainly an amateur like myself could never draw even the weird tube-like animals that resulted from basic blob painting.

Unlike the Chimera Painter, however, this app runs locally and requires a beefy Nvidia video card to do it. GPUs have long been the hardware of choice for machine learning applications, and something like a real-time GAN definitely needs a chunky one. You can download the app for free here.

 

#apps, #artificial-intelligence, #machine-learning, #media, #nvidia, #science

Nvidia acquires hi-def mapping startup DeepMap to bolster AV technology

Chipmaker Nvidia is acquiring high-definition mapping startup DeepMap, the companies announced. DeepMap’s mapping IP will help bolster Nvidia Drive, the chipmaker’s autonomous vehicle technology platform.

“The acquisition is an endorsement of DeepMap’s unique vision, technology and people,” said Ali Kani, vice president and general manager of Automotive at Nvidia, in a statement. “DeepMap is expected to extend our mapping products, help us scale worldwide map operations and expand our full self-driving expertise.”

One of the biggest challenges to achieving full autonomy in a passenger vehicle is maintaining precise localization and up-to-date mapping information that reflects current road conditions. By integrating DeepMap’s tech, Nvidia’s autonomous stack should gain precision, giving the vehicle an enhanced ability to locate itself on the road.
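Neither company details how DeepMap’s maps are matched against live sensor data, so the snippet below is only a toy illustration of the general principle behind HD-map-based localization: compare where surveyed landmarks actually appear in the sensor feed with where a rough GPS/odometry pose says they should appear, and solve for the correction. The landmark coordinates, noise level and perfect data association are all assumptions made for the example, not DeepMap’s or Nvidia’s method.

```python
# Toy 2D illustration of map-based localization (not DeepMap's or Nvidia's pipeline).
# Idea: the HD map stores precise landmark positions; matching live detections of
# those landmarks against the map lets the car correct its rough GPS/odometry pose.
import numpy as np

map_landmarks = np.array([[10.0, 2.0], [12.0, -1.5], [15.0, 3.0]])  # surveyed positions (map frame)
rough_pose = np.array([0.8, -0.4])                                   # coarse position estimate, ~1 m off

# Detections of the same landmarks, expressed relative to the (unknown) true pose.
true_pose = np.array([0.0, 0.0])
detections = map_landmarks - true_pose + np.random.normal(0, 0.05, map_landmarks.shape)

# With known data association, the translation correction is just the mean residual.
predicted = map_landmarks - rough_pose        # where detections *should* appear if rough_pose were right
correction = (detections - predicted).mean(axis=0)
refined_pose = rough_pose - correction

print("rough pose:", rough_pose, "-> refined pose:", np.round(refined_pose, 2))
```

Real systems also solve for rotation and fuse the result with odometry in a filter, but the core benefit of a centimeter-accurate prior map is the same.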

“Joining forces with Nvidia will allow our technology to scale more quickly and benefit more people sooner. We look forward to continuing our journey as part of the Nvidia team,” said James Wu, co-founder and CEO of DeepMap, in a statement.

DeepMap — founded by James Wu and Mark Wheeler, former employees of Google, Apple and Baidu — can use Nvidia Drive’s software-defined platform to scale its maps across AV fleets quickly and without using too much data storage, via over-the-air updates. Nvidia will also invest in new capabilities for DeepMap as part of the deal.

Nvidia is expected to finalize the acquisition in Q3 2021.

#autonomous-vehicles, #deepmap, #ma, #nvidia, #transportation

Microsoft plans to launch dedicated Xbox cloud gaming hardware

Microsoft will soon launch a dedicated device for game streaming, the company announced today. It’s also working with a number of TV manufacturers to build the Xbox experience right into their internet-connected screens, and Microsoft plans to bring cloud gaming to the PC Xbox app later this year, too, with a focus on play-before-you-buy scenarios.

It’s unclear what these new game streaming devices will look like. Microsoft didn’t provide any further details. But chances are, we’re talking about either a Chromecast-like streaming stick or a small Apple TV-like box. So far, we also don’t know which TV manufacturers it will partner with.

It’s no secret that Microsoft is bullish about cloud gaming. With Xbox Game Pass Ultimate, it’s already making it possible for its subscribers to play more than 100 console games on Android, streamed from the Azure cloud, for example. In a few weeks, it’ll open cloud gaming in the browser on Edge, Chrome and Safari, to all Xbox Game Pass Ultimate subscribers (it’s currently in limited beta). And it is bringing Game Pass Ultimate to Australia, Brazil, Mexico and Japan later this year, too.

In many ways, Microsoft is unbundling gaming from the hardware — similar to what Google is trying with Stadia (an effort that, so far, has fallen flat for Google) and Amazon with Luna. The major advantage Microsoft has here is a large library of popular games, something that’s mostly missing on competing services, with the exception of Nvidia’s GeForce Now platform — though that one has a different business model since its focus is not on a subscription but on allowing you to play the games you buy in third-party stores like Steam or the Epic store.

What Microsoft clearly wants to do is expand the overall Xbox ecosystem, even if that means it sells fewer dedicated high-powered consoles. The company likens this to the music industry’s transition to cloud-powered services backed by all-you-can-eat subscription models.

“We believe that games, that interactive entertainment, aren’t really about hardware and software. It’s not about pixels. It’s about people. Games bring people together,” said Microsoft’s Xbox head Phil Spencer. “Games build bridges and forge bonds, generating mutual empathy among people all over the world. Joy and community — that’s why we’re here.”

It’s worth noting that Microsoft says it’s not doing away with dedicated hardware, though, and is already working on the next generation of its console hardware — but don’t expect a new Xbox console anytime soon.

#amazon, #android, #australia, #brazil, #cloud-gaming, #computing, #directx, #gadgets, #gaming, #google, #hardware, #japan, #luna, #mexico, #microsoft, #nvidia, #phil-spencer, #tc, #xbox, #xbox-cloud-gaming, #xbox-game-pass

RTX 3070 Ti review: Nvidia leaves the GPU fast lane (for now)

In a normal GPU marketplace, Nvidia’s new GPU—the RTX 3070 Ti—would land either as a welcome jump or a power-per-watt disappointment. In the chip-shortage squeeze of 2021, however, both its biggest successes and shortcomings may slip by without much fanfare.

The company’s RTX 3070 launched eight months ago at an MSRP of $499, and it did so with an incredibly efficient power-to-performance ratio. There’s simply no better 220 W GPU on the market, as the RTX 3070 noticeably pulled ahead of the 200 W RTX 3060 Ti and AMD’s 230 W RX 6700 XT. That efficiency, unsurprisingly, isn’t repeated with the new model released this week: the RTX 3070 Ti. This device’s MSRP jumps 20 percent (to “$599,” but mind the scare quotes), and its TDP screams ahead by 32 percent. We’ve been here before, of course. “Ti”-branded Nvidia cards aren’t usually as power-efficient as their namesakes, and that’s fine, especially if a mild $100 price jump yields a solid increase in performance.

But the RTX 3070 Ti spec sheet doesn’t see Nvidia charge ahead in ways that might match the jump in wattage. And while the 3070 Ti’s performance mostly increases across the board, the gains aren’t in any way a revolution. That may be less about Nvidia’s design prowess and more about squeezing this thing between the impressive duo of the RTX 3070 and RTX 3080 ($699) on an MSRP basis.

Read 12 remaining paragraphs | Comments

#features, #gaming-culture, #nvidia, #nvidia-rtx, #nvidia-rtx-3070-ti, #rtx-3000-series, #tech

Nvidia and Valve are bringing DLSS to Linux gaming… sort of

Three different logos, including a cartoon penguin, have been photoshopped together.

Enlarge / Tux looks a lot more comfortable sitting on that logo than he probably should—Nvidia’s drivers are still proprietary, and DLSS support isn’t available for native Linux apps—only Windows apps running under Proton. (credit: Aurich Lawson / Jim Salter / Larry Ewing / Nvidia)

Linux gamers, rejoice—we’re getting Nvidia’s Deep Learning Super Sampling on our favorite platform! But don’t rejoice too hard; the new support only comes on a few games, and it’s only on Windows versions of those games played via Proton.

At Computex 2021, Nvidia announced a collaboration with Valve to bring DLSS support to Windows games played on Linux systems. This is good news, since DLSS can radically improve frame rates without perceptibly altering graphics quality. Unfortunately, as of this month, fewer than 60 games support DLSS in the first place; of those, roughly half work reasonably well in Proton, with or without DLSS.

What’s a DLSS, anyway?


Nvidia’s own benchmarking shows well over double the frame rate in Metro Exodus. Most third-party benchmarks “only” show an improvement of 50 to 75 percent. Note the DLSS image actually looks sharper and cleaner than the non-DLSS in this case! (credit: nvidia)

If you’re not up on all the gaming graphics jargon, DLSS is an acronym for Deep Learning Super Sampling. Effectively, DLSS takes a low-resolution image and uses deep learning to upsample it to a higher resolution on the fly. The impact of DLSS can be astonishing in games that support the tech—in some cases more than doubling non-DLSS frame rates, usually with little or no visual impact.
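DLSS itself is proprietary and runs on RTX tensor cores, so the sketch below is only a conceptual stand-in for the idea described above: take a frame rendered at low resolution, upscale it cheaply, and let a small learned network add back the missing detail. The module, layer sizes and resolutions are assumptions for illustration; real DLSS additionally consumes motion vectors and previous frames, which is a large part of why it can sharpen rather than merely smooth the image.

```python
# Conceptual sketch of learned super-sampling (illustrative only; DLSS's actual
# network, inputs, and training are proprietary to Nvidia).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Upscale a low-res frame 2x: cheap bilinear base plus a learned residual."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, low_res):
        base = F.interpolate(low_res, scale_factor=2, mode="bilinear", align_corners=False)
        return base + self.refine(base)   # network only has to add missing high-frequency detail

with torch.no_grad():
    low_res_frame = torch.rand(1, 3, 540, 960)     # e.g. rendered at 960x540
    high_res_frame = ToyUpscaler()(low_res_frame)  # upscaled toward 1920x1080
print(high_res_frame.shape)                        # torch.Size([1, 3, 1080, 1920])
```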

Read 10 remaining paragraphs | Comments

#dlss, #gaming-culture, #linux, #linux-gaming, #nvidia, #proton, #steam, #tech

Review: Nvidia RTX 3080 Ti is a powerhouse—but good luck finding it at $1,199 MSRP

Nearly nine months ago, the RTX 3000 series of Nvidia graphics cards launched in a beleaguered world as a seeming ray of hope. The series’ first two GPUs, the RTX 3080 and 3070, were nearly all things to all graphics hounds. Nvidia built these cards upon the proprietary successes of the RTX 2000 series and added sheer, every-API-imaginable rasterization power on top.

An “RTX”-optimized game ran great on the line’s opening salvo of the RTX 3080, sure, but even without streamlined ray tracing or the impressive upsampling of DLSS, it tera’ed a lot of FLOPs. Talk about a fun potential purchase for nerds trapped in the house.

Even better, that power came along with more modest MSRPs compared to what we saw in the RTX 2000 series. As I wrote in September 2020:

Read 40 remaining paragraphs | Comments

#amd, #amd-radeon, #features, #gaming-culture, #nvidia, #nvidia-rtx, #rtx-3080, #rtx-3080-ti, #tech

Nvidia will add anti-mining flags to the rest of its RTX 3000 GPU series

Coming soon: Nearly identical versions of these GPUs, only with "LHR" logos—and new measures to reduce their mining hash rates.

Enlarge / Coming soon: Nearly identical versions of these GPUs, only with “LHR” logos—and new measures to reduce their mining hash rates. (credit: Sam Machkovech)

Nvidia’s GeForce RTX 3000-branded graphics cards are receiving an update off the factory lines starting this month: hardware-level flags meant to slow down the mining of the popular cryptocurrency Ethereum. Nvidia’s Tuesday announcement confirmed that most consumer-grade GPUs coming out of the company’s factories, ranging from the RTX 3060 Ti to the RTX 3080, will ship with a new sticker indicating a “Lite Hash Rate,” or “LHR,” limiter enforced at the hardware, driver, and BIOS level.

If this move sounds familiar, that’s because Nvidia already took a massive swing at the cryptomining problem, only to whiff, with February’s RTX 3060. That GPU’s launch came with promises that its Ethereum mining rates had been cut in half from their full potential rate—a move meant to disincentivize miners from buying up limited stock. And in the GPU’s pre-release period, Nvidia PR Director Bryan Del Rizzo claimed on Twitter that “it’s not just a driver thing. There is a secure handshake between the driver, the RTX 3060 silicon, and the BIOS (firmware) that prevents removal of the hash rate limiter.”
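Nvidia never detailed how that “secure handshake” works, so the snippet below is purely a hypothetical sketch of the general pattern Del Rizzo describes: a challenge/response in which driver, vBIOS and silicon each attest with a shared secret, and a swapped-out component that cannot attest fails the check. Every name, key and function here is invented for illustration and has nothing to do with Nvidia’s actual implementation.

```python
# Hypothetical illustration of a driver/vBIOS/silicon attestation handshake
# (Nvidia has not published how the RTX 3060's actual handshake works).
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)   # stand-in; a real design would use provisioned per-device keys

def attest(component: str, challenge: bytes) -> bytes:
    """A component proves it is genuine by MACing the challenge with the shared key."""
    return hmac.new(SHARED_KEY, component.encode() + challenge, hashlib.sha256).digest()

def stack_is_genuine(responses: dict, challenge: bytes) -> bool:
    """The limiter's configuration is only honored if *every* party attests correctly."""
    return all(
        hmac.compare_digest(responses.get(c, b""), attest(c, challenge))
        for c in ("driver", "vbios", "silicon"))

challenge = os.urandom(16)
good = {c: attest(c, challenge) for c in ("driver", "vbios", "silicon")}
print(stack_is_genuine(good, challenge))        # True: unmodified stack passes the handshake

tampered = dict(good, driver=b"patched driver with the limiter ripped out")
print(stack_is_genuine(tampered, challenge))    # False: a modified driver cannot attest
```

The point of the pattern is that a driver patched to bypass the limiter no longer holds the secret, so the other parties can refuse to cooperate with it.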

Yet shortly after that card’s commercial launch, Nvidia released a developer-specific beta firmware driver that unlocked the GPU’s full mining potential. Remember: that’s firmware, not a BIOS rewrite or anything particularly invasive. With that cat out of the bag, the RTX 3060 forever became an Ethereum mining option.

Read 6 remaining paragraphs | Comments

#gpu, #gpus, #graphics-cards, #nvidia, #nvidia-rtx, #rtx-3080, #tech

Arm launches its latest chip design for HPC, data centers and the edge

Arm today announced the launch of two new platforms, Arm Neoverse V1 and Neoverse N2, as well as a new mesh interconnect for them. As you can tell from the name, V1 is a completely new product and maybe the best example yet of Arm’s ambitions in the data center, high-performance computing and machine learning space. N2 is Arm’s next-generation general compute platform that is meant to span use cases from hyperscale clouds to SmartNICs and running edge workloads. It’s also the first design based on the company’s new Armv9 architecture.

Not too long ago, high-performance computing was dominated by a small number of players, but the Arm ecosystem has scored its fair share of wins here recently, with supercomputers in South Korea, India and France betting on it. The promise of V1 is that it will vastly outperform the older N1 platform, with a 2x gain in floating-point performance, for example, and a 4x gain in machine learning performance.

Image Credits: Arm

“The V1 is about how much performance can we bring — and that was the goal,” Chris Bergey, SVP and GM of Arm’s Infrastructure Line of Business, told me. He also noted that the V1 is Arm’s widest architecture yet and that, while V1 wasn’t specifically built for the HPC market, it was definitely a target market. And while the current Neoverse V1 platform isn’t based on the new Armv9 architecture yet, the next generation will be.

N2, on the other hand, is all about getting the most performance per watt, Bergey stressed. “This is really about staying in that same performance-per-watt-type envelope that we have within N1 but bringing more performance,” he said. In Arm’s testing, NGINX saw a 1.3x performance increase versus the previous generation, for example.

Image Credits: Arm

In many ways, today’s release is also a chance for Arm to highlight its recent customer wins. AWS Graviton2 is obviously doing quite well, but Oracle is also betting on Ampere’s Arm-based Altra CPUs for its cloud infrastructure.

“We believe Arm is going to be everywhere — from edge to the cloud. We are seeing N1-based processors deliver consistent performance, scalability and security that customers want from Cloud infrastructure,” said Bev Crair, senior VP, Oracle Cloud Infrastructure Compute. “Partnering with Ampere Computing and leading ISVs, Oracle is making Arm server-side development a first-class, easy and cost-effective solution.”

Meanwhile, Alibaba Cloud and Tencent are both investing in Arm-based hardware for their cloud services as well, while Marvell will use the Neoverse N2 architecture for its OCTEON networking solutions.

#alibaba, #arm, #aws, #cloud-infrastructure, #cloud-services, #computing, #enterprise, #india, #machine-learning, #nvidia, #oracle, #oracle-cloud, #softbank-group, #south-korea, #svp, #tc, #technology, #tencent

Scale AI founder and CEO Alexandr Wang will join us at TC Sessions: Mobility on June 9

Last week, Scale AI announced a massive $325 million Series E. Led by Dragoneer, Greenoaks Capital and Tiger Global, the raise gives the San Francisco data labeling startup a $7 billion valuation.

Alexandr Wang founded the company back in 2016, while still at MIT. A veteran of Quora and Addepar, Wang built the startup to curate information for AI applications. The company is now a break-even business, with a wide range of top-notch clients, including General Motors, NVIDIA, Nuro and Zoox.

Backed by a ton of venture capital, the company plans a large-scale increase in its headcount, as it builds out new products and expands into additional markets. “One thing that we saw, especially in the course of the past year, was that AI is going to be used for so many different things,” Wang told TechCrunch in a recent interview. “It’s like we’re just sort of really at the beginning of this and we want to be prepared for that as it happens.”

The executive will join us on stage at TC Sessions: Mobility on June 9 to discuss how the company has made a major impact on the industry in its short four years of existence, the role AI is playing in the world of transportation and what the future looks like for Scale AI.

In addition to Wang, TC Sessions: Mobility 2021 will feature an incredible lineup of speakers, presentations, fireside chats and breakouts all focused on the current and future state of mobility — like EVs, micromobility and smart cities for starters — and the investment trends that influence them all.

Investors like Clara Brenner (Urban Innovation Fund), Quin Garcia (Autotech Ventures) and Rachel Holt (Construct Capital) will all grace our virtual stage. They’ll have plenty of insight and advice to share, including the challenges that startup founders will face as they break into the transportation arena.

You’ll hear from CEOs like Starship Technologies’ Ahti Heinla. The company’s been busy testing delivery robots in real-world markets. Don’t miss his discussion touching on challenges ranging from technology to red tape and what it might take to make last-mile robotic delivery a mainstream reality.

Grab your early bird pass today and save $100 on tickets before prices go up in less than a month.

#addepar, #alexandr-wang, #articles, #artificial-intelligence, #autotech-ventures, #clara-brenner, #deliv, #economy, #entrepreneurship, #executive, #general-motors, #greenoaks-capital, #micromobility, #mit, #nuro, #nvidia, #quora, #rachel-holt, #san-francisco, #scale-ai, #starship-technologies, #startup-company, #tc-sessions-mobility, #technology, #tiger-global, #transportation, #urban-innovation-fund, #venture-capital, #wang

Huawei is not a carmaker. It wants to be the Bosch of China

One after another, Chinese tech giants have announced their plans for the auto space over the last few months. Some internet companies, like search engine provider Baidu, decided to recruit help from a traditional carmaker to produce cars. Xiaomi, which makes its own smartphones but has stressed for years it’s a light-asset firm making money from software services, also jumped on the automaking bandwagon. Industry observers are now speculating who will be the next. Huawei naturally comes to their minds.

Huawei seems well-suited for building cars — at least more qualified than some of the pure internet firms — thanks to its history in manufacturing and supply chain management, brand recognition, and vast retail network. But the telecom equipment and smartphone maker repeatedly denied reports claiming it was launching a car brand. Instead, it says its role is to be a Tier 1 supplier for automakers or OEMs (original equipment manufacturers).

Huawei is not a carmaker, the company’s rotating chairman Eric Xu reiterated recently at the firm’s annual analyst conference in Shenzhen.

“Since 2012, I have personally engaged with the chairmen and CEOs of all major car OEMs in China as well as executives of German and Japanese automakers. During this process, I found that the automotive industry needs Huawei. It doesn’t need the Huawei brand, but instead, it needs our ICT [information and communication technology] expertise to help build future-oriented vehicles,” said Xu, who said the strategy has not changed since it was incepted in 2018.

There are three major roles in auto production: branded vehicle manufacturers like Audi, Honda, Tesla, and soon Apple; Tier 1 companies that supply car parts and systems directly to carmakers, including established ones like Bosch and Continental, and now Huawei; and lastly, chip suppliers including Nvidia, Intel and NXP, whose role is increasingly crucial as industry players make strides toward highly automated vehicles. Huawei also makes in-house car chips.

“Huawei wants to be the next-generation Bosch,” an executive from a Chinese robotaxi startup told TechCrunch, asking not to be named.

Huawei makes its position as a Tier 1 supplier unequivocal. So far it has secured three major customers: BAIC, Chang’an Automobile, and Guangzhou Automobile Group.

“We won’t have too many of these types of in-depth collaboration,” Xu assured.

L4 autonomy?

Arcfox, a new electric passenger car brand under state-owned carmaker BAIC, debuted its Alpha S model equipped with Huawei’s “HI” systems, short for Huawei Inside (not unlike “Powered by Intel”), during the annual Shanghai auto show on Saturday. The electric sedan, priced between 388,900 yuan and 429,900 yuan (about $60,000 and $66,000), comes with Huawei functions including an operating system driven by Huawei’s Kirin chip, a range of apps that run on HarmonyOS, automated driving, fast charging, and cloud computing.

Perhaps most eye-catching is that Alpha S has achieved Level 4 capabilities, which Huawei confirmed with TechCrunch.

That’s a bold statement, for it means that the car will not require human intervention in most scenarios — that is, drivers can take their hands off the wheel and nap.

There are some nuances to this claim, though. In a recent interview, Su Qing, general manager for autonomous driving at Huawei, said Alpha S is L4 in terms of “experience” but L2 according to “legal” responsibilities. China has only permitted a small number of companies to test autonomous vehicles without safety drivers in restricted areas and is far from letting consumer-grade driverless cars roam urban roads.

As it turned out, Huawei’s “L4” functions were shown in a demo in which the Arcfox car traveled 1,000 kilometers through a busy Chinese city without human intervention, though a safety driver was present in the driver’s seat. Automating the car is a stack of sensors, including three lidars, six millimeter-wave radars, 13 ultrasonic radars and 12 cameras, as well as Huawei’s own chipset for automated driving.

“This would be much better than Tesla,” Xu said of the car’s capabilities.

But some argue the Huawei-powered vehicle isn’t L4 by strict definition. The debate seems to be a matter of semantics.

“Our cars you see today are already L4, but I can assure you, I dare not let the driver leave the car,” Su said. “Before you achieve really big MPI [miles per intervention] numbers, don’t even mention L4. It’s all just demos.”

“It’s not L4 if you can’t remove the safety driver,” the executive from the robotaxi company argued. “A demo can be done easily, but removing the driver is very difficult.”

“This technology that Huawei claims is different from L4 autonomous driving,” said a director working for another Chinese autonomous vehicle startup. “The current challenge for L4 is not whether it can be driverless but how to be driverless at all times.”
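For readers unfamiliar with the metric Su invokes, miles per intervention is simply the total autonomous mileage divided by the number of times a human had to take over. The figures below are invented purely to show the arithmetic:

```python
# Toy "miles per intervention" (MPI) calculation; all numbers are invented.
autonomous_miles = 12_500        # miles driven with the automated system engaged
interventions = 25               # times a safety driver had to take over
mpi = autonomous_miles / interventions
print(f"MPI = {mpi:.0f} miles per intervention")   # MPI = 500 miles per intervention
```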

L4 or not, Huawei is certainly willing to splurge on the future of driving. This year, the firm is on track to spend $1 billion on smart vehicle components and tech, Xu said at the analyst event.

A 5G future

Many believe 5G will play a key role in accelerating the development of driverless vehicles. Huawei, the world’s biggest telecom equipment maker, would have a lot to reap from 5G rollouts across the globe, but Xu argued the next-gen wireless technology isn’t a necessity for self-driving vehicles.

“To make autonomous driving a reality, the vehicles themselves have to be autonomous. That means a vehicle can drive autonomously without external support,” said the executive.

“Completely relying on 5G or 5.5G for autonomous driving will inevitably cause problems. What if a 5G site goes wrong? That would raise a very high bar for mobile network operators. They would have to ensure their networks cover every corner, don’t go wrong in any circumstances and have high levels of resilience. I think that’s simply an unrealistic expectation.”

Huawei may be happy enough as a Tier 1 supplier if it ends up taking over Bosch’s market. Many Chinese companies are shifting away from Western tech suppliers towards homegrown options in anticipation of future sanctions or simply to seek cheaper alternatives that are just as robust. Arcfox is just the beginning of Huawei’s car ambitions.

#apple, #artificial-intelligence, #asia, #audi, #automotive, #bosch, #china, #continental, #eric-xu, #harmony, #harmonyos, #honda, #huawei, #intel, #nvidia, #nxp, #operating-system, #shanghai, #shenzhen, #supply-chain-management, #tc, #tesla, #transportation, #wireless-technology, #xiaomi