The iPhone 13 Pro goes to Disneyland
This year's iPhone review goes back to Disneyland for the first time in a couple of years for, uh, obvious reasons. I'm happy to report that the iPhone 13 Pro and iPhone 13 performed extremely well, and the limited testing I was able to do on the iPhone 13 mini and iPhone 13 Pro Max showed that, for the first time, you can make a pretty easy choice based on size once you've decided you're OK going without the telephoto.
One of the major reasons I keep bringing these iPhones back to Disneyland is that it's pretty much the perfect place to test the improvements Apple claims it is making in an intense real-world setting. It's typically hot, the network environment is atrocious, you have to use your phone for almost everything these days (pictures, ticket scanning, food ordering) and you're usually there as long as you can manage to get the most for your buck. It's the ideal stress test, one that doesn't involve artificial battery rundowns or controlled photo environments.
In my testing, most of Apple's improvements had a visible, if not always massive, impact on the quality of my trip. Screen brightness, the longer telephoto and battery life were all bright spots.
Performance and battery
The battery of the iPhone 13 Pro hit just over the 13-hour mark in the parks for me, running it right to the dregs. Since there was so much video testing this year, the camera app stayed on screen longer than usual, at just over an hour of active on-screen usage, which puts a bit of a strain on the system. In real-world standard use you'll probably get a bit more than that out of it, so I'm comfortable saying that Apple's estimate of an hour or more of additional video playback time over the iPhone 12 Pro is probably pretty accurate.
Though it was hard to get the same level of stress on the iPhone 13 Pro Max during my tests, I’d say you can expect even more battery life out of it, given the surplus it still had when my iPhone 13 Pro needed charging. Bigger battery, more battery life, not a big shock.
If you're using it in the parks and doing rope drop, I'd plan on taking it off the charger at 6am or so and having a charger handy by about 4pm so you don't go dead. That's not a bad run overall for an iPhone in challenging conditions and with heavy camera use.
Apple's new ProMotion display was a nice upgrade as well, and I did notice the increased screen brightness. Typically the bump in brightness was only truly noticeable side by side with an iPhone 12 Pro displaying high-key content. Popping open the Disneyland app for the barcode meant a bit better consistency in scanning (though that's pretty hard to say for sure) and a visible increase in overall brightness in direct sun. Out of direct sun, I'd say you'd be hard-pressed to tell.
The variable refresh rate of the ProMotion screen cranking all the way up to 120Hz while scrolling Safari is a really nice quality-of-life improvement. I'm unfortunately a bit jaded in this department because I've done a ton of my computing on the iPad Pro for the past couple of years, but it's going to be an amazing bump for iPhone users who haven't experienced it. Because the system is not locked at 120Hz, Apple can conserve battery life by slowing the screen's refresh rate down when you're viewing static content like photos or text and not scrolling. I'm happy to say that I did not see any significant ramp-up lag while scrolling; it's really responsive and seamless in its handling of this variability.
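If you want a rough mental model of how that variability works, here is a toy sketch of content-adaptive refresh selection in Python. The tiers and thresholds are hypothetical placeholders rather than Apple's actual ProMotion policy; the only grounded detail is the panel's roughly 10Hz to 120Hz range.

```python
# Toy sketch of content-adaptive refresh-rate selection, the idea behind a
# variable-rate display. Thresholds and tiers are made up for illustration and
# are not Apple's actual ProMotion policy; the supported range is ~10-120Hz.

def choose_refresh_hz(scroll_velocity, video_fps=None):
    """Pick a refresh rate: match video cadence, ramp up for interaction, idle when static."""
    if video_fps:
        return video_fps          # match the content's frame rate (e.g., 24fps film)
    if scroll_velocity > 200:     # points per second; fast flicks want maximum smoothness
        return 120
    if scroll_velocity > 0:
        return 60
    return 10                     # static photo or text: drop the rate to save power

print(choose_refresh_hz(0))                # 10: static content idles the panel
print(choose_refresh_hz(500))              # 120: fast scrolling ramps to the max
print(choose_refresh_hz(0, video_fps=24))  # 24: playback matches the source
```

The whole battery trick is in that last branch: most of what you look at on a phone is static, so the panel spends most of its time well below 120Hz.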
The new A15 chip is, yes, more powerful than last year's. Here are some numbers if that's your sort of thing:
Impressive as hell, especially alongside more battery life, not less. The performance-per-watt of Apple's devices continues to be the (relatively) unsung victory of its chip department. It's not just that this year's iPhones or the M1 laptops are crazy fast; it's that they're also actually usable for enormous amounts of time while not connected to a charger. For those curious, the iPhone 13 Pro appears to have 6GB of RAM.
Design
The design of the iPhone continues to be driven by the camera and radio. Whatever is necessary to support the sensors and lenses of the camera package and whatever is necessary to ensure that the antennas can accommodate 5G are in control of the wheel at this point in the iPhone’s life, and that’s pretty natural.
The camera array on the back of the iPhone 13 Pro is bigger and taller in order to accommodate the three new cameras Apple has installed here. And I do mean bigger, like 40% bigger overall, with taller lens housings. Apple's new cases now have a very noticeable raised ridge that exists to protect the lenses when you're setting the phone down on a surface.
Everything else is sort of built around the camera and the need for wireless charging and radio performance. But Apple's frosted glass and steel rim look retains its jewel-like quality this year, and these are still really good-looking phones. I doubt the vast majority of people will see them uncased for long, but while you do, they're lovely to look at.
The front notch has been pared down slightly due to improvements in camera packaging, which leaves a tiny bit more screen real-estate for things like videos, but we’ll have to wait to see if developers find clever ways to use the extra pixels.
Now, on to the cameras.
Cameras
It seems impossible that Apple keeps making year-over-year improvements to image quality and shooting options that are big enough to matter. And yet. The camera quality and features are a very real jump from the iPhone 11 Pro across the board, and still a noticeable improvement over the iPhone 12 Pro for you early adopters. Anything older and you're going to get a blast of quality right to the face that you're going to love.
The camera packaging and feature set are also more uniform across the lineup than ever before, with Apple's sensor-shift stabilization system (IBIS, or in-body image stabilization) appearing in every model — even the iPhone 13 mini, which is a crazy achievement given the overall package size of this sensor array.
In my experience in the parks this year, Apple’s improvements to cameras made for a material difference no matter which lens I chose. From low light to long zoom, there’s something to love here for every avid photographer. Oh, and that Cinematic Mode, we’ll talk about that too.
Telephoto
Of all the lenses, the telephoto was not the one I expected much improvement from. But I was pleasantly surprised by the increased range and utility of this lens. I am an admitted telephoto addict, with some 60% of my photos on the iPhone 12 Pro taken with the tele lens rather than the wide. I just prefer the ability to pick and choose my framing more closely without having to crop after the fact.
Having Night Mode on the telephoto now means that it doesn't fall back to a cropped shot from the wide lens in dark conditions as it used to. Now you get the native telephoto optics plus the Night Mode magic. This means much better black points and great overall exposure even handheld at zoom — something that felt completely out of reach a couple of years ago.
With the higher zoom level, portraits are cropped tighter, with better organic bokeh outside of Portrait Mode, which is lovely. With this new lens you're going to be able to shoot better-looking images of people, period.
If you’re a camera person, the 3x reminds me a lot of my favorite 105mm fixed portrait lens. It’s got the crop, it’s got the nice background separation and the optical quality is very, very good on this lens package. Apple knocked it out of the park on the tele this time around.
The longer optical range was also very handy in a Disneyland world where performers are often kept separate from guests — sometimes for effect but mostly because of pandemic precautions. Being able to reach out and get that shot of Kylo Ren hyping up the crowd was a fun thing to be able to do.
Wide
Apple's wide lens gets the biggest overall jump in sensor technology. A larger ƒ/1.5 aperture and new 1.9µm pixels roughly double the light gathering — and it shows. Images at night and inside ride buildings had a marked improvement in overall quality due to deeper blacks and better dynamic range.
With Night Mode enabled, the greater light-gathering range and improved Smart HDR 4 make for deeper blacks and a less washed-out appearance. If I had to characterize it, it would be 'more natural' overall — a theme I've seen play out across the iPhone cameras this time around.
Without Night Mode enabled, the raw improvement in image quality from the extra light being captured is immediately evident. Though I think there are few situations where you need to turn off Night Mode anymore, subjects in motion in low light are one of them, and you'll get a bit of extra wiggle room from this new sensor and lens combo in those instances.
Having sensor-shift OIS come to the wide camera across the iPhone 13 range is a huge godsend for both still shots and video. I'm spoiled, having been able to play with the iPhone 12 Pro Max's stabilization, but if you haven't shot with it before you're going to be incredibly happy with the additional sharpness it brings.
Ultra Wide
Apple's ultra wide camera has been in need of some love for a while. Though it offered a nice additional perspective, it has suffered from a lack of autofocus and subpar light-gathering ability since its release. This time around it gets both a larger ƒ/1.8 aperture and autofocus. Apple claims 92% more light gathering, and my testing in pretty rough lighting conditions shows a massive improvement across the board.
Typically at Disneyland I like to shoot the ultra wide in one of two ways: up close, to create a fisheye-type perspective for portraits, or to snag a vista when the lighting or scene setting is especially good. Having autofocus available improves the first a ton, and the wider aperture gives the second a big boost, too.
Check out these shots of a moonlit Trader Sam's, the kind of snap you might grab because the lighting and scenery are just right. The iPhone 12 Pro isn't bad at all here, but there is a quite clear difference between the two in exposure. Both of these were taken with Night Mode disabled in order to compare the raw improvement from the aperture.
The delta is obvious, and I'm impressed in general with how much Apple keeps improving this ultra wide camera, though it seems clear at this point that we're hitting the upper limits of what a 12MP sensor of this size can bring to a lens with such a wide field of view.
The new ISP improves Night Mode shooting here too — and with a bit more raw range to work with given the wider aperture, your Night Mode shots lose even more of that bright, candy-like look and take on a deeper, more organic feeling.
Macro photos and video
Another new shooting possibility presented by the iPhone 13 Pro is a pretty impressive macro mode that can shoot as close as 2cm. It's really, really well done given that it's being implemented on the ultra wide lens of a smartphone.
I was able to shoot incredibly detailed snaps very, very close up. We're talking 'the surface texture of objects' close; 'pollen hanging off a bee's thorax' close; dew… well, you get the idea. It's close, and it's a nice tool to have without having to carry a macro attachment with you.
I found the sharpness and clarity of the macro images I captured to be excellent within the roughly 40% of the frame at the center of the capture area. Because the macro mode lives on the ultra wide, there is a significant amount of chromatic aberration around the edges of the image. Basically, the lens is so curved that you get a bit of separation between the wavelengths of light coming in at oblique angles, leading to a rainbow effect. This is only truly visible at very close distances, at the minimum of the focal range. If you're a few centimeters away you'll notice it and will probably crop it out or live with it. If you're further away, getting a 'medium macro' at 10cm or whatever, you'll likely not notice it much.
This is a separate issue from the extremely slim depth of field that is absolutely standard with macro lenses. You're going to have to be precise at maximum macro, basically, but that's nothing new.
Given how large-scale Disneyland is, I had to actively seek out ways to use the macro, though I'd imagine it would be more broadly useful in other venues. But I still got cool shots of the textures of the bottles in Radiator Springs and some faux fungi at Galaxy's Edge.
Macro video is similarly fun, but it requires extremely stable hands or a tripod to really take advantage of, given that the slightest movement of your hands moves the frame a huge distance relative to the tiny area in focus. Basically: tiny hand moves, big camera moves in this mode. Still, it's a super fun tool to add to your arsenal, and I had a blast chasing bugs around flower petals in the garden of the Grand Californian hotel with it.
As a way to go from world scale down to fine detail, it's a great option for mixing up your shots.
One interesting quirk of the ultra wide camera being the home of macro on the iPhone 13 Pro is that there is a noticeable transition between the wide and ultra wide cameras as you move into macro range. It presents as a quick shift where you can see one camera clicking off and the other turning on — something that was pretty much never obvious in other scenarios, even though the cameras switch all the time depending on lighting conditions and imaging judgment calls made by the iPhone's camera stack.
Users typically never notice this at all, but now that there is an official macro camera, swooping in close to an object while you're at 1x flips the phone over to the 0.5x mode so you can shoot super close. This is all totally fine, by the way, but it can result in a bit of flutter if you're moving in and out of range, with the cameras continuously switching as you enter and exit 'macro distance' (around 10-15cm).
When I asked about this camera-switching behavior, Apple said that “a new setting will be added in a software update this fall to turn off automatic camera switching when shooting at close distances for macro photography and video.”
That should resolve this relatively small quirk for people who want to work specifically at macro distances.
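For the curious, here is a tiny sketch of the kind of lens-selection logic that produces this behavior, including an off switch like the one Apple describes. The distance thresholds and the hysteresis are my own illustrative guesses, not Apple's camera stack.

```python
# Illustrative sketch of macro auto-switching: near a subject, the camera stack
# flips from the 1x wide to the 0.5x ultra wide for macro. The thresholds,
# hysteresis and toggle are hypothetical, not Apple's actual implementation.

MACRO_ENTER_CM = 12   # inside the ~10-15cm zone where switching was observed
MACRO_EXIT_CM = 16    # leave a little farther out to damp the boundary "flutter"

def pick_lens(distance_cm, current_lens, auto_macro_enabled=True):
    if not auto_macro_enabled:        # stand-in for the setting coming this fall
        return "wide_1x"
    if distance_cm <= MACRO_ENTER_CM:
        return "ultra_wide_0.5x"      # swoop in close: macro takes over
    if current_lens == "ultra_wide_0.5x" and distance_cm < MACRO_EXIT_CM:
        return current_lens           # hysteresis keeps it from flip-flopping
    return "wide_1x"

print(pick_lens(8, "wide_1x"))                             # ultra_wide_0.5x
print(pick_lens(14, "ultra_wide_0.5x"))                    # stays on the ultra wide
print(pick_lens(8, "wide_1x", auto_macro_enabled=False))   # wide_1x
```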
Photographic Styles and Smart HDR 4
One of the constant tensions with Apple’s approach to computational photography has been its general leaning towards the conservative when it comes to highly processed images. Simply put, Apple likes its images to look ‘natural’, where other similar systems from competitors like Google or Samsung have made different choices in order to differentiate and create ‘punchier’ and sometimes just generally brighter images.
I did some comparisons of these approaches back when Apple introduced Night Mode two years ago.
The general idea hasn't changed much even with Apple's new launches this year; the company is still hewing to a natural look as its guiding principle. But now it has introduced Photographic Styles to give you the option of cranking two controls it calls Tone and Warmth. These are, roughly speaking, vibrance and color temperature. You can choose from five presets, including one with no adjustments, or you can tweak the two settings within any of the presets on a scale of -100 to +100.
I assume that, long term, people will play with these and pass around recommendations on how to get a certain look. My favorite is Vibrant, because I like the open shadows and midtone pop, though I'd guess a lot of folks will gravitate toward Rich Contrast, since more contrast is generally more pleasing to the human eye.
In this shot of some kid-sized speeders, you can see the effects on the shadows and midtones as well as the overall color temperature. Rather than a situational filter, I view this as a deep camera-setting feature, much like choosing the type of film you wanted to load in a film camera. For more contrast you might choose a Kodak Ektachrome, for cooler-to-neutral colors perhaps a Fuji, for warm skin tones perhaps a Kodak Portra and for boosted color maybe an Ultramax.
Photographic Styles gives you the option to set up your camera for the way you want color to sit. The setting is then retained when you close camera.app, so when you open it again, it's ready to shoot the way you want. That now goes for the vast majority of camera settings under iOS 15, which is a nice quality-of-life improvement over the old days, when the iPhone camera reset itself every time you opened it.
It's worth noting that these color settings are embedded in the image, which means they are not adjustable afterward the way Portrait Mode's lighting scenarios are. They are also not applied when shooting RAW — which makes sense.
Smart HDR 4 also deserves a mention here because it's now doing an additional bit of smart segmentation based on subjects in the frame. In a situation with a backlit group of people, for instance, the new ISP is going to segment out each of those subjects individually and apply color profiles, exposure, white balance and other adjustments to them — all in real time. This makes for a marked improvement in dark-to-light scenarios like shooting out of windows and shooting into the sun.
I would not expect much improvement out of the selfie camera this year; it's much the same as before. You can use Cinematic Mode on it, though, which is fun if not all that useful for selfies.
Cinematic Mode
This is an experimental mode that has been shipped live to the public. That’s the best way to set the scene for those folks looking to dive into it. Contrary to Apple’s general marketing, this won’t yet replace any real camera rack focus setup on a film set, but it does open up a huge toolset for budding filmmakers and casual users that was previously locked behind a lot of doors made up of cameras, lenses and equipment.
Cinematic Mode uses the camera's depth information, the accelerometer and other signals to craft a video that injects synthetic bokeh (blur) and tracks subjects in the frame to intelligently 'rack' focus between them depending on what it thinks you want. There are also some impressive focus-tracking features built in that allow you to lock onto a subject and follow them in a tracking shot, keeping them in focus through obstacles like crowds, railings and water. I found all of these depth-leveraging tracking features incredibly impressive in my early testing, but they were often let down a bit by the segmentation masking, which struggled to define crisp, clear borders around subjects to separate them from the background. It turns out that doing what Portrait Mode does with a still image is just insanely hard to do 30 times a second with complex, confusing backgrounds.
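For a rough sense of what a mode like this has to do on every frame, here is a heavily simplified sketch: pick a focus target, then blur everything else in proportion to its distance from that focal plane. The names and numbers are placeholders, not Apple's implementation, and the genuinely hard parts (depth estimation and that segmentation masking) are assumed away.

```python
# Conceptual sketch of the focus logic behind a cinematic-style video mode.
# All values and names are illustrative; this is not Apple's implementation.

from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    depth_m: float      # estimated distance from the camera
    saliency: float     # how much the system thinks you care (faces, taps, gaze)

def pick_focus_depth(subjects, locked_subject=None):
    """A user-locked subject wins; otherwise focus on the most salient subject."""
    target = locked_subject or max(subjects, key=lambda s: s.saliency)
    return target.depth_m

def synthetic_blur_radius(pixel_depth_m, focus_depth_m, aperture_strength=6.0):
    """Blur grows with distance from the focal plane, mimicking shallow depth of field."""
    return aperture_strength * abs(pixel_depth_m - focus_depth_m)

# Example: a kid in the foreground wins focus over a distant crowd, and the
# crowd picks up a heavy synthetic blur. Repeat ~30 times a second for video.
subjects = [Subject("kid", 2.0, 0.9), Subject("crowd", 10.0, 0.4)]
focus = pick_focus_depth(subjects)
print(focus, synthetic_blur_radius(10.0, focus))   # 2.0 48.0
```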
The feature is locked to 1080p/30fps, which says a lot about its intended use. This is for family shots presented on the device, AirPlayed to your TV or posted on the web. I'd imagine it will actually get huge uptake with the TikTok filmmaker crowd, who will do cool stuff with the new storytelling tools of selective focus.
I did some test shooting with my kids walking through crowds and riding on carousels that was genuinely, shockingly good. It really does provide a filmic, dreamy quality to the video that I was previously only able to get with quick and continuous focus adjustments on an SLR shooting video with a manually focused lens.
That, I think, is the major key to understanding Cinematic Mode. Despite the marketing, this mode is intended to unlock new creative possibilities for the vast majority of iPhone users who have no idea how to set focal distances, bend their knees to stabilize and crouch-walk-rack-focus their way to these kinds of tracking shots. It really does open up a big bucket that was just inaccessible before. And in many cases I think that those willing to experiment and deal with its near-term foibles will be rewarded with some great looking shots to add to their iPhone memories widget.
I’ll be writing more about this feature later this week so stay tuned. For now, what you need to know is that an average person can whip this out in bright light and get some pretty fun and impressive results, but it is not a serious professional tool, yet. And even if you miss focus on a particular subject you are able to adjust that in post with a quick tap of the edit button and a tap on a subject — as long as it’s within the focal range of the lens.
As a filmmaking tool for the run-and-gun generation, it's a pretty compelling concept. It allows people to spend less time and technical energy on the mechanics of filmmaking and more on the storytelling. Moviemaking has always been an art intertwined with technology — and one of the true exemplars of the ideal that artists are always the first to adopt new technology and push it to its limits.
Just as Apple's Portrait Mode has improved massively in the years since it launched, I expect Cinematic Mode to keep growing and improving. The relatively sketchy performance in low light and the locked zoom are high on my list for bumps next year, as is improved segmentation. It's an impressive technical feat that Apple is able to deliver this kind of slicing and adjustment not only in real-time preview but also in post-shooting editing modes, and I'm looking forward to seeing it evolve.
Assessment
This is a great update that improves the user experience in every way, even during an intense day-long Disneyland outing. The improved brightness and screen refresh mean easier navigation of park systems and better visibility in daylight for directions, wait times and more. The better cameras mean you're getting improved shots in dark-to-light situations like waiting in lines or shooting from under overhangs. The nice new telephoto lets you shoot close-up shots of cast members who are now often separated from the crowds by large distances, which is cool — and as a bonus it acts as a really lovely portrait lens even when you're not in Portrait Mode.
Overall, this was one of the best experiences I've had testing a phone at the parks, with a continuous series of 'wow' moments from the cameras that made me question my own confirmation bias. A lot of shots, like the Night Mode wide angle and telephoto ones I shared above, impressed me so much that I did a lot of gut checking, asking other people in blind tests what they thought of the two images. Each time, the winner was the iPhone 13 — it really is just a clear-cut improvement in image making across the board.
The rest of the package is pretty well turned out here too, with massive performance gains from the A15 Bionic that come not only with no discernible impact on battery life but with a good extra hour to boot. The performance chart above may supply the wow factor, but that performance weighed against the chip's power draw across a full day is what continues to be the most impressive feat of Apple's chip teams.
The iPhone 13 lineup is an impressive field this year, providing a solid moat of image quality, battery life and, now, thankfully, screen improvements that should serve Apple well over the next 12 months.
Tumblr’s subscription product Post+ enters open beta after much scrutiny from users
Tumblr is entering open beta for its subscription product Post+, meaning that all U.S. users can now try out the monetization feature. The product launched in closed beta in July, allowing users hand-picked by Tumblr to place some of their content behind a monthly paywall. This marked the first time that Tumblr allowed bloggers to monetize their content directly on the platform, but the feature was met with backlash from users who worried about how it would change the site's culture.
Now, Tumblr has responded to user feedback by removing the blue Post+ badge that appeared next to the names of users who enabled the feature. Tumblr differentiates itself from other sites by not revealing users' follower and following counts, so users were concerned that the badge, which looked like a Twitter verification check, contradicted that key aspect of Tumblr culture. Tumblr is also adding a $1.99/month price point in open beta — previously, subscriber-only content could be priced at $3.99, $5.99 or $9.99. Tumblr will take only a 5% cut of creator earnings — comparatively, Patreon takes between 5% and 12%, depending on the tier. Payments will be processed through Stripe.
Still, Tumblr users were dismayed by the way Post+ was rolled out. Many bloggers were concerned that in the closed beta, Post+ users didn’t have the ability to block paying subscribers without first contacting support — this could potentially expose users to harassment without the tools to manage it. Tumblr corrected that mistake in the open beta, so now, users can block subscribers themselves. Creators can also put existing content behind the Post+ paywall.
Some users upset with the Post+ rollout staged a protest, which — with over 98,000 notes — is the first thing that shows up when you search "post plus" on Tumblr. Many people on Tumblr have amassed followings by posting iterative fan content, like fanfiction. Tumblr cited fanfiction as an example of the kind of content that creators can put behind a paywall, but users remain concerned that they would be subject to legal action if they did so. Archive of Our Own, a major fanfiction site, prohibits its users from linking to sites like Patreon or Ko-Fi, since some intellectual property rights holders can be litigious about the monetization of fanfiction. While it's considered fair use to make fan content, profiting from it can be considered a violation of copyright.
When Tumblr banned pornographic content in 2018, monthly page views decreased by 29% — to date, the blogging platform hasn’t regained that traffic. After being sold to Automattic in 2019, Tumblr has committed to capturing the attention of Gen Z audiences, who the platform says make up about 48% of its users. Tumblr says it’s catering Post+ to serve Gen Z audiences, but the results of the open beta will begin to reveal whether or not this is what users on the platform want.
Stairwell secures $20M Series A to help organizations outsmart attackers
Back when Stairwell emerged from stealth in 2020, the startup was shrouded in secrecy. Now with $20 million in Series A funding, its founder and CEO Mike Wiacek — who previously served as chief security officer at Chronicle, Google’s moonshot cybersecurity company — is ready to talk.
Alongside the $20 million round, co-led by Sequoia Capital and Accel, Stairwell is launching Inception, a threat-hunting platform that aims to help organizations determine whether they are compromised now or were in the past. Unlike other threat detection platforms, Inception takes an "inside out" approach to cybersecurity, starting by looking inward at a company's own data.
“This helps you study what’s in your environment first before you start thinking about what’s happening in the outside world,” Wiacek tells TechCrunch. “The beautiful thing about that approach is that’s not information that outside parties, a.k.a. the bad guys, are privy to.”
This data, all of which is treated as suspicious, is continuously evaluated in light of new indicators and new threat intelligence. Stairwell claims this enables organizations to detect anomalies within just days, rather than the industry average of 280 days, as well as to “bootstrap” future detections.
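That "inside out" idea, retaining your own telemetry and re-scoring all of it whenever new intelligence arrives, can be sketched in a few lines. To be clear, this is a generic retro-hunting loop for illustration, not Stairwell's Inception code; the data structures and values are made up.

```python
# Generic sketch of retrospective threat hunting: retain observations from your
# own environment, then re-evaluate the full history whenever new indicators of
# compromise (IOCs) arrive. Purely illustrative, not Stairwell's implementation.

from datetime import datetime

# Everything seen in the environment is kept and treated as potentially suspicious.
observations = [
    {"host": "build-01", "sha256": "aa11...", "first_seen": datetime(2021, 2, 3)},
    {"host": "mail-02",  "sha256": "bb22...", "first_seen": datetime(2021, 6, 9)},
]

def rescan(observations, new_iocs):
    """Flag historical observations that match indicators published after the fact."""
    bad_hashes = {ioc["sha256"] for ioc in new_iocs}
    return [obs for obs in observations if obs["sha256"] in bad_hashes]

# A threat feed update lands months later; the old observation becomes a finding.
print(rescan(observations, new_iocs=[{"sha256": "aa11...", "campaign": "example"}]))
```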
“If you go and buy a threat intelligence feed from Vendor X, do you really think that someone who’s spending hundreds of thousands, or even millions, of dollars to conduct an offensive campaign isn’t going to make sure that whatever they’re using isn’t in that feed?” said Wiacek. “They know what McAfee knows and they know what other antivirus engines know, but they don’t know what you know, and that’s a very powerful advantage that you have there.”
Stairwell’s $20 million in Series A funding, which comes less than 12 months after it secured $4.5 million in seed funding, will be used to further advance the Inception platform and to increase the startup’s headcount; the Palo Alto-based firm currently has a modest headcount of 21.
The Inception platform, which the startup claims finally enables enterprises to “outsmart the bad guys”, is launching in early release for a limited number of customers, with full general availability scheduled for 2022.
“I just wish we had a product to market when SolarWinds happened,” Wiacek added.
Study finds half of Americans get news on social media, but percentage has dropped
A new report from Pew Research finds that around a third of U.S. adults continue to get their news regularly from Facebook, though the exact percentage has slipped from 36% in 2020 to 31% in 2021. This drop reflects an overall slight decline in the number of Americans who say they get their news from any social media platform — a percentage that also fell by 5 percentage points year-over-year, going from 53% in 2020 to a little under 48%, Pew’s study found.
By definition, “regularly” here means the survey respondents said they get their news either “often” or “sometimes,” as opposed to “rarely,” “never,” or “don’t get digital news.”
The change comes at a time when tech companies have come under heavy scrutiny for allowing misinformation to spread across their platforms, Pew notes. That criticism has ramped up over the course of the pandemic, leading to vaccine hesitancy and refusal, which in turn has led to worsened health outcomes for many Americans who consumed the misleading information.
Despite these issues, the percentage of Americans who regularly get their news from various social media sites hasn’t changed too much over the past year, demonstrating how much a part of people’s daily news habits these sites have become.

Image Credits: Pew Research
In addition to the one-third of U.S. adults who regularly get their news on Facebook, 22% say they regularly get news on YouTube. Twitter and Instagram are regular news sources for 13% and 11% of Americans, respectively.
However, many of the sites have seen small declines as a regular source of news among their own users, says Pew. This is a different measurement from the share of all U.S. adults who get news on each site, as it speaks to how each site's own user base perceives it. In a way, it's a measurement of the shifting news consumption behaviors of the typically younger social media user.
Today, 55% of Twitter users regularly get news on the platform, compared with 59% last year. Meanwhile, the share of Reddit users who get news on the site dropped from 42% to 39% in 2021. YouTube fell from 32% to 30%, and Snapchat fell from 19% to 16%. Instagram is roughly the same, going from 28% in 2020 to 27% in 2021.
Only one social media platform grew as a news source during this time: TikTok.
In 2020, 22% of the short-form video platform's users said they regularly got their news there; in 2021, that figure rose to 29%.
Overall, though, most of these sites have very little traction with the wider adult population in the U.S. Fewer than 1 in 10 Americans regularly get their news from Reddit (7%), TikTok (6%), LinkedIn (4%), Snapchat (4%), WhatsApp (3%) or Twitch (1%).

Image Credits: Pew Research
There are demographic differences between who uses which sites, as well.
White adults make up the majority of regular news consumers on Facebook and Reddit (60% and 54%, respectively). Black and Hispanic adults make up significant proportions of the regular news consumers on Instagram (20% and 33%, respectively). Younger adults tend to turn to Snapchat and TikTok, while the majority of news consumers on LinkedIn have four-year college degrees.
Of course, Pew’s latest survey, conducted from July 26 to Aug. 8, 2021, is based on self-reported data. That means people’s answers are based on how the users perceive their own usage of these various sites for newsgathering. This can produce different results compared with real-world measurements of how often users visited the sites to read news. Some users may underestimate their usage and others may overestimate it.
People may also not fully understand the ramifications of reading news on social media, where headlines and posts are often molded into inflammatory clickbait in order to entice engagement in the form of reactions and comments. This, in turn, may encourage strong reactions — but not necessarily from those worth listening to. In recent studies, Pew found that social media news consumers tended to be less knowledgeable about the facts on key news topics, like elections or Covid-19, and were more frequently exposed to fringe conspiracy theories (which is pretty apparent to anyone reading the comments!).
For the current study, the full sample size was 11,178 respondents, and the margin of sampling error was plus or minus 1.4 percentage points.
Get the early-bird price on group discount passes to TC Sessions: SaaS 2021
September arrived in the blink of an eye, and October 27 — TC Sessions: SaaS 2021 to be precise — is hot on its heels. Now’s the time to gather your team and strategize how you’ll cover the day-long event to make as many connections and discover as many opportunities as possible.
Step 1: Take advantage of the early-bird group discount. When you buy passes for four or more people, you pay just $45 each — that’s $30 off each pass. Sweet!
Don't dilly-dally or shilly-shally. The early-bird price expires on October 1 at 11:59 pm (PT).
Your pass may be discounted, but you'll get the full TC Sessions experience — all the speakers, demos, networking and breakout sessions. Plus, video-on-demand means you can catch up on anything you miss later, when it fits your schedule.
Check out the event agenda or read more about just some of the many people and companies coming to TC Sessions: SaaS. Note: We’re adding new speakers and presentations to the event agenda every week, and you can sign up here for updates.
As a team, you can cover more ground. Tune in to a main stage panel discussion while one colleague dives into a breakout session and another sets up a 1:1 product demo or taps CrunchMatch to connect with potential investors. You might go faster alone, but you’ll go further together.
You can bet the industry’s top experts will be in the virtual house covering both the benefits and challenges of SaaS: the Next Generation. Here are just two examples.
SaaS Security, Today and Tomorrow: Enterprises face a constant stream of threats, from nation states to cybercriminals and corporate insiders. After a year when billions worked from home and the cloud reigned supreme, startups and corporations alike can't afford to stay off the security pulse. Edna Conway, vice president and chief security & risk officer for Azure at Microsoft, and Olivia Rose, CISO and VP of IT & security at Amplitude, discuss what SaaS startups need to know about security now and in the future.
How Startups are Turning Data into Software Gold: The era of big data is behind us. Today’s leading SaaS startups are working with data instead of merely fighting to help customers collect information. Jenn Knight (AgentSync), Barr Moses (Monte Carlo) and Dan Wright (DataRobot), leaders of three data-focused startups, will discuss how today’s SaaS companies leverage data to build new companies, attack new problems and, of course, scale like mad.
TC Sessions: SaaS 2021 takes place on October 27. Don’t wait — the early-bird price on the group discount offer expires October 1 at 11:59 pm (PT).
Is your company interested in sponsoring or exhibiting at TC Sessions: SaaS 2021? Contact our sponsorship sales team by filling out this form.
F5 acquires cloud security startup Threat Stack for $68 million
Applications networking company F5 has announced it’s acquiring Threat Stack, a Boston-based cloud security and compliance startup, for $68 million.
The deal, which comes months after F5 bought multi-cloud management startup Volterra for $500 million, sees the 25-year-old company looking to bolster its cloud security portfolio as applications become a growing focus for cybercriminals. Businesses lose more than $100 billion a year to attacks targeting digital experiences, F5 says, and these experiences are increasingly powered by applications distributed across multiple environments and interconnected through APIs.
Threat Stack, which was founded in November 2012 and has since amassed more than $70 million across six funding rounds including a $45 million Series C round led by F-Prime Capital Partners and Eight Roads Ventures, specializes in cloud security for applications and provides customers with real-time threat detection for cloud infrastructure and workloads. Unlike many cloud security tools that kick in after an intrusion, Threat Stack takes a more proactive approach, alerting organizations to all known vulnerabilities and providing a report on the holes that need to be plugged.
The startup's intrusion detection platform, the Threat Stack Cloud Security Platform, works across cloud, hybrid cloud, multi-cloud and containerized environments, and is perhaps best known for its Slack integration that alerts DevOps teams to security concerns in real time. Threat Stack has a number of big-name customers, according to its website, including Glassdoor, Ping Identity and Proofpoint.
F5 says that integrating its application and API protection solutions with Threat Stack’s cloud security capabilities and expertise will enhance visibility across application infrastructure and workloads, making it easier for customers to adopt consistent security in any cloud.
“Applications are the backbone of today’s modern businesses, and protecting them is mission-critical for our customers,” said Haiyan Song, EVP of Security at F5. “Threat Stack brings technology and talent that will strengthen F5’s security capabilities and further our adaptive applications vision with broader cloud observability and actionable security insights for customers.”
The acquisition, which is expected to close in the first quarter of F5's fiscal year 2022, is subject to closing conditions.
Roku debuts new Streaming Stick 4K bundles, software update with voice and mobile features
Weeks after Amazon introduced an updated Fire TV lineup that included, for the first time, its own TVs, Roku today is announcing its own competitive products in a race to capture consumers' attention before the holiday shopping season. Its updates include a new Roku Streaming Stick 4K and Roku Streaming Stick 4K+, the latter of which ships with Roku's newer hands-free voice remote. The company is also refreshing the Roku Ultra LT, a Walmart-exclusive version of its high-end player. And it announced the latest software update, Roku OS 10.5, which adds updated voice features, a new Live TV channel for home screens and other minor changes.
The new Streaming Stick 4K builds on Roku's four-year-old product, the Streaming Stick+, as it offers the same stick form factor designed to be hidden behind the TV set. This version, however, has a faster processor, which allows the device to boot up to 30% faster and load channels more quickly, Roku claims. The Wi-Fi is also improved, offering faster speeds and smart algorithms that help make sure users get on the right band for the best performance in homes where network congestion is an increasingly common problem — especially with the pandemic-induced remote work lifestyle. The new Stick adds support for Dolby Vision and HDR10+, giving it the "4K" moniker.
This version ships with Roku’s standard voice remote for the same price of $49.99. For comparison, Amazon’s new Fire TV Stick Max with a faster processor and speedier Wi-Fi is $54.99. However, Amazon is touting the addition of Wi-Fi 6 and support for its game streaming service, Luna, as reasons to upgrade.
Roku’s new Streaming Stick 4K+ adds the Roku Voice Remote Pro to the bundle instead. This is Roku’s new remote, launched in the spring, that offers rechargeability, a lost remote finder, and hands-free voice support via its mid-field microphone, so you can just say things like “hey Roku, turn on the TV,” or “launch Netflix,” instead of pressing buttons. Bought separately, this remote is $29.99. The bundle sells for $69.99, which translates to a $10 discount over buying the stick and remote by themselves.

Image Credits: Roku
Both versions of the Streaming Stick will be sold online and in stores starting in October.
The Roku Ultra LT ($79.99), built for Walmart exclusively, has also been refreshed with a faster processor, more storage, a new Wi-Fi radio with up to 50% longer range, support for Dolby Vision, Bluetooth audio streaming, and a built-in ethernet port.
Plus, Roku notes that TCL will become the first device partner to use the reference designs it introduced at CES for wireless soundbars, with its upcoming Roku TV wireless soundbar. This device connects over Wi-Fi to the TV and works with the Roku remote, and will arrive at major retailers in October where it will sell for $179.99.
The other big news is Roku’s OS 10.5 software release. The update isn’t making any dramatic changes this time around, but is instead focused largely on voice and mobile improvements.
The most noticeable consumer-facing change is the ability to add a new Live TV channel to your home screen, which lets you launch The Roku Channel's 200+ free live TV channels more easily, instead of having to first visit Roku's free streaming hub and then navigate to the Live TV section. This could make the Roku feel more like traditional TV for cord-cutters abandoning their TV guide for the first time.
Other tweaks include expanded support for launching channels using voice commands, with most now supported; new voice search and podcast playback, with a more visual "music and podcast" row and Spotify as a launch partner; the ability to control sound settings in the mobile app; an added Voice Help guide in settings; and additional sound configuration options for Roku speakers and soundbars (e.g., using speaker pairs and a soundbar in a left/center/right configuration, or in a full 5.1 surround sound system).
A handy feature for entering email addresses and passwords on set-up screens using voice commands is new, too. Roku says it sends the voice data off-device to its speech-to-text partner, and the audio is anonymized. Roku doesn't get the password or store it, as it goes directly to the channel partner. While there are always privacy concerns with voice data, the addition is a big perk from an accessibility standpoint.

Image Credits: Roku
One of the more under-the-radar but potentially useful changes coming in OS 10.5 is an advanced A/V sync feature that lets you use your smartphone's camera to help Roku refine the audio delay when you're using wireless headphones to listen to the TV. This feature is offered through the mobile app.
The Roku mobile app in the U.S. is also gaining new features with the OS 10.5 update: a Home tab for browsing collections of movies and shows across genres, and a "Save List," which functions as a way to bookmark shows or movies you might hear about — like when chatting with friends — and want to remember to watch later when you're back home in front of the TV.
The software update will roll out to Roku devices over the weeks ahead. It typically comes to Roku players first, then rolls out to TVs.
Inside GitLab’s IPO filing
While the technology and business world worked towards the weekend, developer operations (DevOps) firm GitLab filed to go public. Before we get into our time off, we need to pause, digest the company’s S-1 filing, and come to some early conclusions.
GitLab competes with GitHub, which Microsoft purchased for $7.5 billion back in 2018.
The company is notable for its long-held, remote-first stance, and for being more public with its metrics than most unicorns — for some time, GitLab had a November 18, 2020 IPO target in its public plans, to pick an example. We also knew when it crossed the $100 million recurring revenue threshold.
Considering GitLab’s more recent results, a narrowing operating loss in the last two quarters is good news for the company.
The company’s IPO has therefore been long expected. In its last primary transaction, GitLab raised $286 million at a post-money valuation of $2.75 billion, per Pitchbook data. The same information source also notes that GitLab executed a secondary transaction earlier this year worth $195 million, which gave the company a $6 billion valuation.
Let’s parse GitLab’s growth rate, its final pre-IPO scale, its SaaS metrics, and then ask if we think it can surpass its most recent private-market price. Sound good? Let’s rock.
The GitLab S-1
GitLab intends to list on the Nasdaq under the symbol “GTLB.” Its IPO filing lists a placeholder $100 million raise estimate, though that figure will change when the company sets an initial price range for its shares. Its fiscal year ends January 31, meaning that its quarters are offset from traditional calendar periods by a single month.
Let’s start with the big numbers.
In its fiscal year ended January 2020, GitLab posted revenues of $81.2 million, gross profit of $71.9 million, an operating loss of $128.4 million, and a modestly greater net loss of $130.7 million.
And in the year ended January 31, 2021, GitLab’s revenue rose roughly 87% to $152.2 million from a year earlier. The company’s gross profit rose around 86% to $133.7 million, and operating loss widened nearly 67% to $213.9 million. Its net loss totaled $192.2 million.
This paints a picture of a SaaS company growing quickly at scale, with essentially flat gross margins (88%). Growth has not been inexpensive either — GitLab spent more on sales and marketing than it generated in gross profit in the past two fiscal years.
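For readers who want to check the math, the growth and margin figures above are easy to reproduce from the reported numbers; a quick sketch, with dollars in millions:

```python
# Quick sanity check on the S-1 figures quoted above (dollars in millions).
fy2020 = {"revenue": 81.2, "gross_profit": 71.9}
fy2021 = {"revenue": 152.2, "gross_profit": 133.7}

growth = (fy2021["revenue"] - fy2020["revenue"]) / fy2020["revenue"]
print(f"Revenue growth: {growth:.0%}")   # ~87%

for label, year in (("FY2020", fy2020), ("FY2021", fy2021)):
    print(f"{label} gross margin: {year['gross_profit'] / year['revenue']:.0%}")   # 89%, 88%
```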
Web host Epik was warned of a critical website bug weeks before it was hacked
Hackers associated with the hacktivist collective Anonymous say they have leaked gigabytes of data from Epik, a web host and domain registrar that provides services to far-right sites like Gab, Parler and 8chan, which found refuge in Epik after they were booted from mainstream platforms.
In a statement attached to a torrent file of the dumped data this week, the group said the 180 gigabytes amounts to a “decade’s worth” of company data, including “all that’s needed to trace actual ownership and management” of the company. The group claimed to have customer payment histories, domain purchases and transfers, and passwords, credentials, and employee mailboxes. The cache of stolen data also contains files from the company’s internal web servers, and databases that contain customer records for domains that are registered with Epik.
The hackers did not say how they obtained the breached data or when the hack took place, but timestamps on the most recent files suggest the hack likely happened in late February.
Epik initially told reporters it was unaware of a breach, but an email sent out by founder and chief executive Robert Monster on Wednesday alerted users to an “alleged security incident.”
TechCrunch has since learned that Epik was warned of a critical security flaw weeks before its breach.
Security researcher Corben Leo contacted Epik’s chief executive Monster over LinkedIn in January about a security vulnerability on the web host’s website. Leo asked if the company had a bug bounty or a way to report the vulnerability. LinkedIn showed Monster had read the message but did not respond.
Leo told TechCrunch that a library used on Epik’s WHOIS page for generating PDF reports of public domain records had a decade-old vulnerability that allowed anyone to remotely run code directly on the internal server without any authentication, such as a company password.
“You could just paste this [line of code] in there and execute any command on their servers,” Leo told TechCrunch.
Leo ran a proof-of-concept command from the public-facing WHOIS page to ask the server to display its username, which confirmed that code could run on Epik's internal server, but he did not test to see what access the server had, as doing so would be illegal.
It's not known if the Anonymous hacktivists used the same vulnerability that Leo discovered. (Part of the stolen cache also includes folders relating to Epik's WHOIS system, but the hacktivists left no contact information and could not be reached for comment.) But Leo contends that if a hacker exploited the same vulnerability, and the server had access to other servers, databases or systems on the network, that foothold could have opened the door to the kind of data stolen from Epik's internal network in February.
“I am really guessing that’s how they got owned,” said Leo, who confirmed that the flaw has since been fixed.
Monster confirmed he received Leo’s message on LinkedIn, but did not answer our questions about the breach or say when the vulnerability was patched. “We get bounty hunters pitching their services. I probably just thought it was one of those,” said Monster. “I am not sure if I actioned it. Do you answer all your LinkedIn spams?”
Twitter Super Follows has generated only around $6K+ in its first two weeks
Twitter's creator platform Super Follows is off to an inauspicious start, having generated somewhere around $6,000 in U.S. iOS revenue in the first two weeks the feature has been live, according to app intelligence data provided by Sensor Tower. It's made only around $600 or so in Canada. A small portion of that revenue may be attributable to Ticketed Spaces, Twitter's other in-app purchase offered in the U.S. — but there's no way for an outside firm to break out that portion.
Twitter first announced its plans to launch Super Follows during its Analyst Day event in February, where the company detailed many of its upcoming initiatives to generate new revenue streams.
Today, Twitter's business is highly dependent on advertising, and Super Follows is one of the few ways it's aiming to diversify. The company is also now offering a way for creators to charge for access to their live events with Ticketed Spaces and, outside the U.S., Twitter has begun testing a premium product for power users called Twitter Blue.

Image Credits: Twitter
But Super Follows, which targets creators, is the effort with the most potential appeal to mainstream users.
It’s also one that is working to capitalize on the growing creator economy, where content creators build a following, then generate revenue directly through subscriptions — decreasing their own dependence on ads or brand deals, as a result. The platforms they use for this business skim a little off the top to help them fund the development of the creator tools. (In Twitter’s case, it’s taking only a 3% cut.)
The feature would seem to make sense for Twitter, a platform that already allows high-profile figures and regular folks to hobnob in the same timeline and have conversations. Super Follows ups that access by letting fans get even closer to their favorite creators — whether those are musicians, artists, comedians, influencers, writers, gamers, or other experts, for example. These creators can set a monthly subscription price of $2.99, $4.99, or $9.99 to provide fans with access to bonus, "behind-the-scenes" content of their choosing. These generally come in the form of extra tweets, Q&As and other interactions with subscribers.

Image Credits: Twitter
At launch, Twitter opened up Super Follows to a handful of creators, including the beauty and skincare-focused account @MakeupforWOC; astrology account @TarotByBronx; sports-focused @KingJosiah54; writer @myeshachou; internet personality and podcaster @MichaelaOkla; spiritual healer @kemimarie; music charts tweeter @chartdata; Twitch streamers @FaZeMew, @VelvetIsCake, @MackWood1, @GabeJRuiz, and @Saulsrevenge; YouTubers @DoubleH_YT, @LxckTV, and @PowerGotNow; and crypto traders @itsALLrisky and @moon_shine15; among others. Twitter says there are fewer than 100 creators in total who have access to Super Follows.
While access on the creation side is limited, the ability to subscribe to creators is not. Any Twitter iOS user in the U.S. or Canada can "Super Follow" any number of the supported creator accounts. In the U.S., Twitter had 37 million average monetizable daily active users as of Q2 2021. Of course, only some subset of those will be iOS users.
Still, Twitter could easily count millions upon millions of "potential" customers for its Super Follows platform at launch. Its current revenue indicates that possibly only thousands of consumers have subscribed so far, given that many of the top in-app purchases are for creators offering content at the lower price points.

Image Credits: Sensor Tower
Sensor Tower notes the $6,000 in U.S. consumer spending on iOS was calculated during the first two weeks of September (Sept. 1-14). Before this period, U.S. iOS users spent only $100 from August 25 through 31 — a figure that would indicate user spending on Ticketed Spaces during that time. In other words, the contribution of Ticketed Spaces revenue to this total of $6,000 in iOS consumer spending is likely quite small.
In Canada, the other market where Super Follows is now available to subscribers, Twitter's iOS in-app purchase revenue from September 1 through September 14 was a negligible $600. (This would also include Twitter Blue subscription revenue, which is being tested in Canada and Australia.)
Worldwide, Twitter users on iOS spent $9,000 during that same time, which would include other Ticketed Spaces revenue and tests of its premium service, Twitter Blue. (Twitter's Tip Jar, a way to pay creators directly, does not work through in-app purchases.)
Unlike other Twitter products that were developed by watching what users were already doing anyway — like using hashtags or retweeting content — many of Twitter's newer features are attempts at redefining the use cases for its platform. In a massive rush of product pushes, Twitter has recently launched tools not just for creators, but also for e-commerce, organizing reading materials, subscribing to newsletters, socializing in communities, chatting through audio, fact-checking content, keeping up with trends, conversing more privately, and more.
Twitter's position on the slower start to Super Follows is that it's still too early to make any determinations. While that's fair, it's also worth tracking adoption to see whether the new product has gotten any rapid, out-of-the-gate traction.
“This is just the start for Super Follows,” a Twitter spokesperson said, reached for comment about Sensor Tower’s figures. “Our main goal is focused on ensuring creators are set up for success and so we’re working closely with a small group of creators in this first iteration to ensure they have the best experience using Super Follows before we roll out more widely.”
The spokesperson also noted Twitter Super Follows had been set up to help creators make more money as it scales.
“With Super Follows, people are eligible to earn up to 97% of revenue after in-app purchase fees until they make $50,000 in lifetime earnings. After $50,000 in lifetime earnings, they can earn up to 80% of revenue after in-app purchase fees,” they said.
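To make that tiering concrete, here is a small illustrative calculation of a creator's take under those terms. The 30% in-app purchase fee is an assumption for the example (Apple's standard rate; 15% applies to some developers), and this is not Twitter's actual payout logic; crossing the $50,000 threshold mid-period is also ignored for simplicity.

```python
# Illustrative payout math for the Super Follows terms described above. The 30%
# in-app purchase fee is an assumption (Apple's standard rate; 15% in some
# cases), and this simplified logic is not Twitter's actual implementation.

IAP_FEE = 0.30
TIER_THRESHOLD = 50_000   # lifetime creator earnings, in dollars
SHARE_BEFORE = 0.97       # creator share of post-fee revenue below the threshold
SHARE_AFTER = 0.80        # creator share after $50,000 in lifetime earnings

def creator_payout(gross_subscription_dollars, lifetime_earnings_so_far):
    """Simplified: applies a single tier per payout, ignoring a mid-period crossover."""
    after_fees = gross_subscription_dollars * (1 - IAP_FEE)
    share = SHARE_BEFORE if lifetime_earnings_so_far < TIER_THRESHOLD else SHARE_AFTER
    return after_fees * share

# Example: $1,000 of subscriptions for a creator still under the threshold.
print(creator_payout(1_000, lifetime_earnings_so_far=2_500))   # 700 * 0.97 = 679.0
```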
Confluent CEO Jay Kreps is coming to TC Sessions: SaaS for a fireside chat
As companies process ever-increasing amounts of data, moving it in real time is a huge challenge for organizations. Confluent is a streaming data platform built on top of the open source Apache Kafka project that’s been designed to process massive numbers of events. To discuss this, and more, Confluent CEO and co-founder Jay Kreps will be joining us at TC Sessions: SaaS on Oct 27th for a fireside chat.
Data is a big part of the story we are telling at the SaaS event, as it has such a critical role in every business. Kreps has said in the past that data streams are at the core of every business, from sales to orders to customer experiences. As he wrote in a company blog post announcing the company's $250 million Series E in April 2020, Confluent is working to process all of this data in real time — and that was a big reason why investors were willing to pour so much money into the company.
“The reason is simple: though new data technologies come and go, event streaming is emerging as a major new category that is on a path to be as important and foundational in the architecture of a modern digital company as databases have been,” Kreps wrote at the time.
The company's streaming data platform takes a multi-faceted approach to streaming and builds on the open source Kafka project. While anyone can download and use Kafka, as with many open source projects, companies may lack the resources or expertise to deal with the raw open source code. Many a startup has been built on open source to help simplify whatever the project does, and Confluent and Kafka are no different.
Kreps told us in 2017 that companies using Kafka as a core technology include Netflix, Uber, Cisco and Goldman Sachs. But those companies have the resources to manage complex software like this. Mere mortal companies can pay Confluent to access a managed cloud version, or they can manage it themselves and install it in the cloud infrastructure provider of their choice.
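For a sense of what "using Kafka" looks like at the application level, here is a minimal sketch with the third-party kafka-python client. It assumes a broker is already running at localhost:9092, and the topic name and event payload are made up for the example.

```python
# Minimal sketch of producing and consuming events with Apache Kafka via the
# kafka-python client. Assumes a broker at localhost:9092; the topic name and
# payload are illustrative.

from kafka import KafkaProducer, KafkaConsumer

# Publish an event to a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("page-views", b'{"user": "123", "page": "/pricing"}')
producer.flush()

# Elsewhere, a consumer reads the stream of events from the same topic.
consumer = KafkaConsumer("page-views", bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")
for message in consumer:
    print(message.value)
    break   # read a single event for the sketch
```

Confluent's managed offering sits on top of this kind of plumbing, handling the brokers and operations so teams can focus on the producer and consumer side.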
The project was actually born at LinkedIn in 2011, when its engineers were tasked with building a tool to process the enormous number of events flowing through the platform. The company eventually open sourced the technology it had created, and Apache Kafka was the result.
Confluent launched in 2014 and raised over $450 million along the way. In its last private round in April 2020, the company scored a $4.5 billion valuation on a $250 million investment. As of today, it has a market cap of over $17 billion.
In addition to our discussion with Kreps, the conference will include Google's Javier Soltero and Amplitude's Olivia Rose, as well as investors Kobie Fuller and Casey Aylward, among others. We hope you'll join us. It's going to be a thought-provoking lineup.
Buy your pass now to save up to $100 when you book by October 1. We can’t wait to see you in October!