Posted on

On lying AIs

A yellow-eyed cat tilts its head at the camera, gazing up from a grey bedspread. ‘London Trip’ is the AI’s title for this photo-montage ‘Memory’ plucked from the depths of my iPhone camera roll. It’s selected a sad score of plinking piano and sweeping violin. The algorithm has calculated it must tug at the heartstrings.

Cut to a crop of a desk with a 2FA device resting on a laptop case. It’s not at all photogenic. On to a shot of a sofa in a living room. It’s empty. The camera inclines toward a radio on a side table. Should we be worried for the invisible occupant? The staging invites cryptic questions.

Cut to an outdoor scene: A massive tree spreading above a wrought iron park fence. Another overcast day in the city. Beside it an eccentric shock of orange. A piece of public art? A glass-blown installation? There’s no time to investigate or interrogate. The AI is moving on. There’s more data clogging its banks. 

Cut to a conference speaker. White, male, besuited, he’s gesticulating against a navy wall stamped with some kind of insignia. The photo is low quality, snapped in haste from the audience, details too fuzzy to pick out. Still, the camera lingers, panning across the tedious vista. A wider angle shows conference signage for something called ‘Health X’. This long-distant press event rings a dim bell. Another unlovely crop: My voice recorder beside a brick wall next to an iced coffee. I guess I’m working from a coffee shop.

On we go. A snap through a window-frame of a well-kept garden, a bird-bath sprouting from low bushes. Another shot of the shrubbery shows a ladder laid out along a brick wall. I think it looks like a church garden in Southwark but I honestly can’t tell. No matter. The AI has lost interest. Now it’s obsessing over a billboard carrying a Google Play ad: “All the tracks you own and millions more to discover — Try it now for free,” the text reads above a weathered JCDecaux brand stamp.

There’s no time to consider what any of this means because suddenly it’s nighttime. It must be; my bedside lamp is lit. Or is it? Now we’re back on the living room sofa with daylight and a book called ‘Nikolski’ (which is also, as it happens, about separation and connection and random artefacts — although its artful narrative succeeds in serendipity).

Cut to a handful of berries in a cup. Cut to an exotic-looking wallflower which I know grows in the neighbourhood. The score is really soaring now. A lilting female vocal lands on cue to accompany a solitary selfie.

I am looking unimpressed. I have so many questions. 

The AI isn’t quite finished. For the finale: A poorly framed crop of a garden fence and a patio of pot plants, washing weeping behind the foliage. The music is fading, the machine is almost done constructing its London trip. The last shot gets thrust into view: Someone’s hand clasping a half-drunk punch. 

Go home algorithm, you’re drunk.

Footnote: Apple says on-device machine learning powers iOS’ “intelligent photos experience”, which “analyzes every photo in a user’s photo library using on-device machine learning [to] deliver a personalized experience for each user” — with the advanced processing slated to include scene classification, composition analysis, people and pets identification, quality analysis and identification of facial expressions.

Read More

Posted on

AI-drawn voting districts could help stamp out gerrymandering

Gerrymandering is one of the most insidious ways of influencing our political process. By legally redrawing the boundaries within which votes are collected and counted, outcomes can be influenced — even fixed in advance for years. The solution may be an AI system that draws voting districts with an impartial hand.

Ordinarily, the districts that correspond to legislative seats within a state are drawn essentially by hand, and partisan operatives on both sides of the aisle have used the process to create distorted shapes that exclude hostile voters and lock in their own. It’s so effective that it’s become commonplace — so much so that there’s even a font made out of gerrymandered districts shaped like letters.

What can be done? Automate it — at least partially, say Wendy Tam Cho and Bruce Cain in the latest issue of Science, which has a special section dedicated to “democracy.” Cho, who teaches at the University of Illinois at Urbana-Champaign, has been pursuing computational redistricting for years, and just last year was an expert witness in an ACLU lawsuit that ended up overturning Ohio’s gerrymandered districts as unconstitutional.

In an essay explaining their work, Cho and Cain summarize the approach thusly:

The way forward is for people to work collaboratively with machines to produce results not otherwise possible. To do this, we must capitalize on the strengths and minimize the weaknesses of both artificial intelligence (AI) and human intelligence.

Machines enhance and inform intelligent decision-making by helping us navigate the unfathomably large and complex informational landscape. Left to their own devices, humans have shown themselves to be unable to resist the temptation to chart biased paths through that terrain.

There are effectively an infinite number of ways you could divide a state into a given number of shapes, so the AI agent must be primed with criteria that limit those shapes. For instance, perhaps a state doesn’t want its districts to be any larger than 150 square miles. But the criteria must also account for shape — you don’t want a snakelike district slithering around the margins of others (as indeed often occurs in gerrymandered areas), or one district enveloped by another. And then there are the innumerable historical, geographical and demographic considerations.
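To make the constraint idea concrete, here is a minimal, hypothetical sketch (not Cho and Cain’s actual method) of scoring a candidate plan against two such criteria: a hard cap on district area and a simple compactness measure, with a penalty for uneven populations. The data structures, weights and numbers are invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class District:
    population: int
    area: float       # square miles
    perimeter: float  # miles

def compactness(d: District) -> float:
    """Polsby-Popper score: 1.0 for a circle, much lower for snakelike shapes."""
    return 4 * math.pi * d.area / (d.perimeter ** 2)

def score_plan(districts: list[District], max_area: float = 150.0) -> float:
    """Higher is better; oversized districts are rejected outright."""
    total_pop = sum(d.population for d in districts)
    ideal = total_pop / len(districts)
    score = 0.0
    for d in districts:
        if d.area > max_area:                  # the 150-square-mile style limit
            return float("-inf")
        deviation = abs(d.population - ideal) / ideal
        score += compactness(d) - deviation    # reward compact, evenly sized districts
    return score

# Two hypothetical 2-district plans: plan_b has a snakelike, underpopulated district
plan_a = [District(500_000, 120, 60), District(510_000, 130, 65)]
plan_b = [District(300_000, 140, 200), District(710_000, 90, 40)]
print(score_plan(plan_a) > score_plan(plan_b))  # True
```

A search procedure would then generate and score an astronomical number of such plans; the point is only that the criteria, set by people, encode what counts as fair, while the machine does the exploring.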

This illustration from Cho and Cain’s article shows a simplified districting problem and how partisan districts can be created depending on who’s drawing them. (Image credits: Cho/Cain/Science)

In other words, while the rationale for drawing must be set by people, it is machines that must perform “the meticulous exploration of the astronomical number of ways in which a state can be partitioned.”

Exactly how this would work would be up to the individual state, which will have its own rules and authorities as to how district maps are drawn. You see the problem immediately: We have entered politics, another complex landscape through which humans tend to “chart biased paths.”

Speaking to TechCrunch, Cho emphasized that although automation has potential benefits for nearly every state process, “transparency within that process is essential for developing and maintaining public trust and minimizing the possibilities and perceptions of bias.”

Some states have already adopted something like this, she pointed out: North Carolina ended up choosing randomly from 1,000 computer-drawn maps. So there is certainly a precedent. But enabling widespread use means creating widespread trust — something that’s in mighty short supply these days.

Mixing tech and politics has seldom proved easy, partly because of the invincible ignorance of our elected officials, and partly because of a justified distrust of systems that are difficult for the average citizen to understand and, if necessary, correct.

“The details of these models are intricate and require a fair amount of knowledge in statistics, mathematics and computer science but also an equally deep understanding of how our political institutions and the law work,” Cho said. “At the same time, while understanding all the details is daunting, I am not sure this level of understanding by the general public or politicians is necessary. The public generally believes in the science behind vaccines, DNA tests and flying aircraft without understanding the technical details.”

Indeed, few people worry whether the wings will fall off their plane, but planes have demonstrated their reliability over a century or so. And the greatest challenge for vaccines may be ahead of us.

“Society seems to have a massive trust deficit at the moment, a fact that we must work hard to reverse,” Cho admitted. “Trust should be and must be earned. We have to develop the processes that engender the trust.”

But the point stands: You don’t need to be a statistician or machine learning expert to see that the maps produced by these methods — peer-reviewed and ready to put to use, it should be said — are superior to, and infinitely more fair than, many of those whose boundaries are as crooked as the politicians who manipulated them.

The best way for the public to accept something is to see that it works, and like mail-in voting, we already have some good points to show off. First, obviously, is the North Carolina system, which shows that a fair district map can be drawn by a computer reliably — indeed, so reliably that a thousand equally fair maps can easily be generated, so there is no question of cherry-picking.

Second, the Ohio case shows that computer-drawn maps can provide a fact-based contrast to gerrymandered ones, by demonstrating that the gerrymandered boundaries can only be explained by partisan meddling, not by randomness or demographic constraints.

With AI it is usually wise to have a human in the loop, and doubly so with AI in politics. The roles of the automated system must be carefully circumscribed, its limitations honestly explained, and its place within existing processes shown to be the result of careful consideration rather than expediency.

“The public needs to have a sense of the reflection, contemplation and deliberation within the scientific community that has produced these algorithms,” said Cho.

It’s unlikely these methods will enter wide use soon, but over the next few years, as maps are challenged and redrawn for other reasons, it may (and perhaps should) become standard to have an impartial system take part in the process.

Read More

Posted on

Deep Science: Dog detectors, Mars mappers and AI-scrambling sweaters

Research papers come out at far too rapid a rate for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.

This week in Deep Science spans the stars all the way down to human anatomy, with research concerning exoplanets and Mars exploration, as well as understanding the subtlest habits and most hidden parts of the body.

Let’s proceed in order of distance from Earth. First is the confirmation of 50 new exoplanets by researchers at the University of Warwick. It’s important to distinguish this confirmation process from discovering exoplanets among the huge volumes of data collected by various satellites: these planets had already been flagged as candidates, but no one had yet determined whether the data was conclusive. The team built on previous work that ranked planet candidates from least to most likely, creating a machine learning agent that could make precise statistical assessments and say with conviction, here is a planet.
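For a sense of what a “precise statistical assessment” can look like in code, here is a minimal, hypothetical sketch of the validation step: a classifier assigns each candidate a probability of being a genuine planet, and only candidates above a strict threshold are confirmed. The features, model, threshold and synthetic data are invented for illustration and are not the Warwick team’s pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Hypothetical features per transit candidate: depth, duration, period, signal-to-noise (scaled 0-1)
X_train = rng.random((1000, 4))
# Toy labels: in this synthetic set, high signal-to-noise candidates are the real planets
y_train = (X_train[:, 3] > 0.6).astype(int)

clf = GradientBoostingClassifier().fit(X_train, y_train)

candidates = rng.random((50, 4))
p_planet = clf.predict_proba(candidates)[:, 1]

# "Validation" here means the model is at least 99% sure the signal is a planet,
# i.e. the estimated false-positive probability is below 1%
validated = np.flatnonzero(p_planet > 0.99)
print(f"{len(validated)} of {len(candidates)} candidates validated as planets")
```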

“A prime example when the additional computational complexity of probabilistic methods pays off significantly,” said the university’s Theo Damoulas. It’s an excellent example of a field where marquee announcements, like the Google-powered discovery of Kepler-90i, represent only the earliest results rather than a final destination, emphasizing the need for further study.

In our own solar system, we are getting to know our neighbor Mars quite well, though even the Perseverance rover, currently hurtling through the void in the direction of the red planet, is, like its predecessors, a very resource-limited platform. With a small power budget and years-old radiation-hardened CPUs, there’s only so much in the way of image analysis and other AI-type work it can do locally. But scientists are preparing for when a new generation of more powerful, efficient chips makes it to Mars.

Read More

Posted on

What does GPT-3 mean for the future of the legal profession?

Historically, lawyers have struggled with some AI-based tools

One doesn’t have to dig too deep into legal organizations to find people who are skeptical about artificial intelligence.

AI is getting tremendous attention and significant venture capital, but AI tools frequently underwhelm in the trenches. Here are a few reasons why that is, and why I believe GPT-3, a beta version of which was recently released by OpenAI, might be a game changer in legal and other knowledge-focused organizations.

GPT-3 is getting a lot of oxygen lately because of its size, scope and capabilities. However, it should be recognized that a significant amount of that attention is due to its association with Elon Musk. OpenAI, which created GPT-3, was founded by heavy hitters Musk and Sam Altman and is supported by Marc Benioff, Peter Thiel and Microsoft, among others. Arthur C. Clarke once observed that great innovations happen after everyone stops laughing.

Musk has made the world stop laughing in so many ambitious areas that the world is inclined to give a project in which he’s had a hand a second look. GPT-3 is getting the benefit of that spotlight. I suggest, however, that the attention might be warranted on its merits.

Why have some AI-based tools struggled in the legal profession, and how might GPT-3 be different?

1. Not every problem is a nail

It is said that when you’re a hammer, every problem is a nail. The networks and algorithms that power AI are quite good at drawing correlations across enormous datasets that would not be obvious to humans. One of my favorite examples of this is a loan-underwriting AI that determined that the charge level of the battery on your phone at the time of application is correlated to your underwriting risk. Who knows why that is? A human would not have surmised that connection. Those things are not rationally related, just statistically related.
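To make the distinction between statistical and rational relationships concrete, here is a toy, hypothetical sketch: the data is synthetic and the battery-level effect is deliberately baked in, yet a simple correlation check (or any model trained on this data) will surface it without any causal story behind it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: phone battery level at application time (0-100%)
battery = rng.uniform(0, 100, n)

# Bake in a weak statistical link: lower battery -> slightly higher default rate.
# There is no rational reason for this; it is a correlation in the data, nothing more.
default_prob = 0.10 + 0.10 * (1 - battery / 100)
defaulted = rng.random(n) < default_prob

# A correlation check (or a trained model) finds the relationship anyway
print(np.corrcoef(battery, defaulted.astype(float))[0, 1])  # roughly -0.08 on this synthetic data
```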

Read More

Posted on

Launched with $17 million by two former Norwest investors, Tau Ventures is ready for its closeup

Amit Garg and Sanjay Rao have spent the bulk of their professional lives developing technology, founding startups and investing in startups at places like Google, Microsoft, HealthIQ and Norwest Venture Partners.

Over their decade-long friendship the two men discussed working together on a venture fund, but the time was never right — until now. Since last August, the two men have been raising capital for their inaugural fund, Tau Ventures.

The name, like the two partners, is a bit wonky. Tau is two times pi and Garg and Rao chose it as the name for the partnership because it symbolizes their analytical approach to very early stage investing.

It’s a strange thing to launch a venture fund in a pandemic, but for Garg and Rao, the opportunity to provide very early stage investment capital into startups working on machine learning applications in healthcare, automation and business was too good to pass up.

Garg had spent twenty years in Silicon Valley working at Google and launching companies including HealthIQ. Over the years he’d amassed an investment portfolio that included the autonomous vehicle company Nutonomy, BioBeats, Glooko, Cohero Health, Terapede, Figure1, HealthifyMe, Healthy.io and RapidDeploy.

Meanwhile, Rao, a Palo Alto, Calif. native, MIT alum, Microsoft product manager and founder of the Accelerate Labs accelerator in Palo Alto, Calif., said that it was important to give back to entrepreneurs after decades in the Valley honing skills as an operator.

Image credit: Tau Ventures

Both Rao and Garg acknowledge that a number of funds focused on machine learning have emerged, including Basis Set Ventures, SignalFire and Two Sigma Ventures, but these investors lack the direct company-building experience that the two new investors have.

Garg, for instance, has actually built a hospital in India and has a deep background in healthcare. As an investor, he’s already seen an exit through his investment in Nutonomy, and both men have a deep understanding of the enterprise market — especially around security.

So far, the company has made three investments in automation, another three in enterprise software, and five in healthcare.

The firm currently has $17 million in capital under management raised from institutional investors like the law firm Wilson Sonsini and a number of undisclosed family offices and individuals, according to Garg.

Much of that capital was committed after the pandemic hit, Garg said. “We started August 29th… and did the final close May 29th.”

The idea was to close the fund and start putting capital to work — especially in an environment where other investors were burdened with sorting out their existing portfolios, and not able to put capital to work as quickly.

“Our last investment was done entirely over Zoom and Google Meet,” said Rao.

That virtual environment extends to the firm’s shareholder meetings and conferences, some of which have attracted over 1,000 attendees, according to the partners.

Read More

Posted on

Microsoft’s new Flight Simulator was worth the wait

It’s been 14 years since the launch of Flight Simulator X, which long seemed like it would be the final release in the long-running series. When the company announced it would re-launch the franchise just over a year ago, using a new graphics engine and satellite data from Bing Maps, it sure created a lot of hype among both old fans and those who had never played the older version but were drawn to the next-gen graphics the company showed off in its trailer. The good news is, the new Microsoft Flight Simulator was worth the wait and, starting August 18, you’ll be able to see for yourself.

Pricing starts at $59.99 for the standard version of Flight Simulator on both the Microsoft Store and Steam. If you want access to more planes and hand-crafted airports, you will need to buy either the $89.99 deluxe version or, for even more of those, the $119.99 premium version. You can find the details of which airports and planes are included in each version here.

Rest assured, though, especially if this is your first outing in Flight Simulator, with the base version you can still land at the same 36,000 airports as the others, and there are more than enough planes to keep you occupied — you’ll just miss out on a few extras (and if you really want to, you can buy upgrades to the more premium versions later).

The cheapest way to give the game a spin is to subscribe to the Xbox Game Pass for a month, because the standard edition is now part of Microsoft’s subscription program, and if you’re a new subscriber, the first month only costs $1.

I already dove pretty deeply into the beta a few weeks ago, but Microsoft provided me with an early review copy of the final release of the premium version, so it’s worth taking a second look at what you’ll get.

The first thing everybody I showed the new sim to told me was how beautiful it looks. That’s true for the scenery, which includes a mix of cities reconstructed in every detail from the photogrammetry data in Bing Maps and cities that Microsoft partner Blackshark.ai reconstructed from 2D map data (for more on how that works, here is our interview with Blackshark). What makes this work is not just the realistic cities and towns, but also that they feel pretty alive, with traffic zipping down highways and local streets, street lights, and even the windows of houses lighting up at night.

And then there’s the weather model. Flight Simulator features the prettiest clouds you’ve ever seen in a game. Rain clouds in the distance look just like in real life. Wind acts realistically on your plane. If you fly in winter, snow covers the ground — and you can play around with all of those settings in real time without having to reload the game with every change.

Image Credits: TechCrunch

But since Microsoft and Asobo Studios decided to almost build a digital twin of our planet in Flight Simulator — and because the only way to do that is to use machine learning instead of placing every object by hand — you’ll still find plenty of oddness in the world, too. I had hoped that the team would fix more of these between the beta and final release, but I haven’t seen a lot of changes here. That means you’ll find bridges that look more like dams, roads that go under water and a few misplaced buildings and trees — there are so many trees where they don’t belong.

The way I look at this is that Flight Simulator is still a work in progress, and that hasn’t changed in the final release. I’m okay with that because even when there are mistakes, the cities and towns still usually look better than in any paid add-on for other flight simulators. Because a lot of this data is streamed from the Azure cloud and the team will continue to tweak its algorithms, I also expect that we’ll see fewer and fewer of these issues over time. Early on, I got hung up on this, but after a while, I realized that it doesn’t take away from enjoying the game — but it’s something to be aware of.

One area where I really hoped Microsoft would have improved the game, though, is air traffic control. This was always an area where Microsoft (and to be fair, all of its competitors) struggled. It was a problem during the alpha and beta, and it still is, which is really a shame; what we have now just doesn’t feel very realistic.

Air traffic controllers don’t use standard phraseology (no real-life controller will ever tell you that he will contact you next when you leave his airspace, for example), don’t hand you off from tower to departure and constantly tell everybody to go around. I’m pretty sure I’ve done more go-arounds in three days with the final version of Flight Simulator than during the entire training for my pilot’s license. That feels like something that could be easily improved in the next update because, maybe even more so than the occasional graphics hiccup, it breaks the immersion for those looking for a simulator experience.

I also just wish that the controllers would call airlines by their real names. Microsoft has partnered with FlightAware to show real-life flights in the game, which depart and land on time, but somehow there are no liveries for them (except for the occasional stray United plane, which hints that we’ll see more of these over time) and only a limited set of models. Again, that’s something we’ll probably see more of in future updates.

Speaking of those flight models, Microsoft tweaked some of them a bit since the beta and, while I’ve never been in the cockpit of a 787, the single-engine Cessnas that I’ve flown still behave like I would expect them to in the sim (though I find the rudder is still pretty twitchy and needs some tweaking). I can’t vouch for the other aircraft in the game, but I expect pilots who fly them in real life will find they are similarly realistic.

I still found some bugs with the flight instruments here and there and the GPS systems sometimes won’t let me activate a course, for example. I also wish the simulation of the G1000 and G3X glass cockpits would go just a little bit further. I can’t help but wonder if Microsoft and Asobo specifically held back here a bit to leave more room for add-on developers.

Image Credits: Microsoft

Performance hasn’t really changed since the beta, but I’m typically getting around 40 frames per second with the 2070 Super and i7-9700K, even when barely skimming over the roofs of cities like Barcelona or Berlin.

The only time I’ve seen real dips down into the 20s is when flying low over some of the hand-crafted airports like Frankfurt, and even then, after turning around and flying over the airport again, those numbers shot back up to the 40s.

You’ll notice that I used the words “simulator” and “game” interchangeably in this post. That’s because I think, in many ways, Flight Simulator is what you want it to be. There are plenty of game elements here, with flight training, landing challenges and bush-flying exercises. And in this age of COVID-19, there’s also something about it that just feels very relaxing when you’re flying around the planet low and slow, looking at the gorgeous scenery and forgetting about everything else for a while. I do worry, though, that most casual players will get bored after a short time.

For simmers, the new Flight Simulator is a godsend and provides a great basis for their hobby for years to come, especially given that Microsoft will continue to update it and because a lot of companies will develop all kinds of add-ons for it — and thanks to the inherent flaws in the game, there’s still room for somebody to not just build additional aircraft but also handcrafted versions of smaller airports, for example.

As I said in my preview, Flight Simulator is a technical marvel. Is it perfect? No. But I can forgive those imperfections because it does so much right.

Read More

Posted on

In conversation with European B2B seed VC La Famiglia

Earlier this month, La Famiglia, a Berlin-based VC firm that invests in seed-stage European B2B tech startups, disclosed that it raised a second fund totaling €50 million, up from its debut fund of €35 million in 2017.

The firm writes first checks of up to €1.5 million in European startups that use technology to address a significant need within an industry. It’s backed 37 startups to date (including Forto, Arculus and Graphy) and seeks to position itself on the strength of its industry network, many of whose members are LPs.

La Famiglia’s investors include the Mittal, Pictet, Oetker, Hymer and Swarovski families, industry leaders Voith and Franke, as well as the families behind conglomerates such as Hapag-Lloyd, Solvay, Adidas and Valentino. In addition, the likes of Niklas Zennström (Skype, Atomico), Zoopla’s Alex Chesterman and Personio’s Hanno Renner are also LPs.

Meanwhile, the firm describes itself as “female-led,” with founding partner Dr. Jeannette zu Fürstenberg and partner Judith Dada at the helm.

With the ink only just dry on the new fund, I put questions to the pair to get more detail on La Famiglia’s investment thesis and what it looks for in founders. We also discussed how the firm taps its “old economy” network, the future of industry 4.0 and what La Famiglia is doing — if anything — to ensure it backs diverse founders.

TechCrunch: You describe La Famiglia as B2B-focused, writing first checks of up to €1.5 million in European startups using technology to address a significant need within an industry. In particular, you cite verticals such as logistics and supply chain, the industrial space, and insurance, while also referencing sustainability and the future of work.

Can you elaborate a bit more on the fund’s remit and what you look for in founders and startups at such an early stage?

Jeannette zu Fürstenberg: Our ambition is to capture the fundamental shift in value creation across the largest sectors of our European economy, which are either being disrupted or enabled by digital technologies. We believe that opportunities in fields such as manufacturing or logistics will be shaped by a deep process understanding of these industries, which is the key differentiator in creating successful outcomes and a strength that European entrepreneurs can leverage.

We look for visionary founders who see a new future, where others only see fragments, with grit to push through adversity and a creative force to shape the world into being.

Judith Dada: Picking up a lot of signals from various expert sources in our network informs the opportunity landscape we see and allows us to invest with a strong sense of market timing. Next to verticals like insurance or industrial manufacturing, we also invest into companies tackling more horizontal opportunities, such as sustainability in its vast importance across industries, as well as new ways that our work is being transformed, for workers of all types. We look for opportunities across a spectrum of technological trends, but are particularly focused on the application potential of ML and AI.

Read More

Posted on

Deepfake video app Reface is just getting started on shapeshifting selfie culture

A bearded Rihanna gyrates and sings about shining bright like a diamond. A female Jack Sparrow looks like she’d be a right laugh over a pint. The cartoon contours of The Incredible Hulk lend envious tint to Donald Trump’s awfully familiar cheek bumps.

Selfie culture has a fancy new digital looking glass: Reface (previously Doublicat) is an app that uses AI-powered deepfake technology to let users try on another face/form for size. Aka “face swap videos”, in its marketing parlance.

Deepfake technology — or synthesized media, to give it its less pejorative label — is just getting into its creative stride, according to Roman Mogylnyi, CEO and co-founder of RefaceAI, which makes the eponymous app whose creepily lifelike output you may have noticed bubbling up in your social streams in recent months.

The startup has Ukrainian founders — as well as Mogylnyi, there’s Oles Petriv, Yaroslav Boiko, Dima Shvets, Denis Dmitrenko, Ivan Altsybieiev and Kyle Sygyda — but the business is incorporated in the US. Doubtless it helps to be nearer to Hollywood studios whose video clips power many of the available face swaps. (Want to see Titanic‘s Rose Hall recast with Trump’s visage staring out of Kate Winslet’s body? No we didn’t either — but once you’ve hit the button it’s horribly hard to unsee… 😷)

TechCrunch noticed a bunch of male friends WhatsApp-group-sharing video clips of themselves as scantily clad female singers and figured the developers must be onto something — à la FaceApp, or the earlier selfie trend of style transfer (a craze that was sparked by Prisma and cloned mercilessly by tech giants).

Reface’s deepfake effects are powered by a class of machine learning frameworks known as GANs (generative adversarial networks), which is how it’s able to get such relatively slick results, per Mogylnyi. In a nutshell, it generates a new animated face using the twin inputs (the selfie and the target video), rather than trying to mask one on top of the other.
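For a rough picture of the twin-input idea, here is a minimal, hypothetical sketch of the generator half of such a setup: it conditions on an identity vector extracted from the selfie plus the current frame of the target video and synthesizes a new face, rather than pasting one image over another. The architecture, layer sizes and dimensions are invented for illustration and are not RefaceAI’s actual model; a real GAN would also pair this generator with a discriminator during training.

```python
import torch
import torch.nn as nn

class FaceSwapGenerator(nn.Module):
    """Toy generator: combines an identity vector (from the selfie)
    with features of the target video frame to synthesize a new face."""
    def __init__(self, id_dim: int = 256):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + id_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, identity: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        feats = self.frame_encoder(frame)                       # (B, 128, H/4, W/4)
        id_map = identity[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        return self.decoder(torch.cat([feats, id_map], dim=1))  # new face, not a mask overlay

# One selfie-derived identity vector, one 64x64 video frame
gen = FaceSwapGenerator()
fake_face = gen(torch.randn(1, 256), torch.randn(1, 3, 64, 64))
print(fake_face.shape)  # torch.Size([1, 3, 64, 64])
```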

Deepfake technology has of course been around for a number of years at this point, but the Reface team’s focus is on making the tech accessible and easy to use — serving it up as a push-button smartphone app with no need for more powerful hardware and near-instant transformation from a single selfie snap. (It says it turns selfies into face vectors representing a user’s distinguishing facial features — and pledges that uploaded photos are removed from its Google Cloud platform “within an hour”.)

No tech expertise, nor much effort, is needed to achieve a lifelike effect. The inexorable social shares flowing from such a user-friendly application then do the product marketing for it.

It was a similar story with the AI tech underpinning Prisma — which left that app open to merciless cloning, though it was initially only transforming photos. But Mogylnyi believes the team behind the video face swaps has enough of a head (ha!) start to avoid a similar fate.

He says usage of Reface has been growing “really fast” since it added high-res videos this June — having initially launched with only far grainier GIF face swaps on offer. In terms of metrics, the startup is not disclosing monthly active users but says it’s had around 20 million downloads at this point across 100 countries. (On Google Play the app has almost a full five-star rating, from nearly 150,000 reviews.)

“I understand that an interest from huge companies might come. And it’s obvious. They see that it’s a great thing — personalization is the next trend, and they are all moving in the same direction, with Bitmoji, Memoji, all that stuff — but we see personalized, hyperrealistic face swapping as the next big thing,” Mogylnyi tells TechCrunch.

“Even for [tech giants] it takes time to create such a technology. Even speaking about our team we have a brilliant team, brilliant minds, and it took us a long time to get here. Even if you spawn many teams to work on the same problems surely you will get somewhere… but currently we’re ahead and we’re doing our best to work on new technologies to keep in pace,” he adds.

Reface’s app is certainly having a moment right now, bagging top download slots on the iOS App Store and Google Play in 100 countries — helped, along the way, by its reflective effects catching the eye of the likes of Elon Musk and Britney Spears (who Mogylnyi says have retweeted examples of its content).

But he sees this bump as just the beginning — predicting much bigger things coming down the synthesized pipe as more powerful features are switched on. The influx of bitesized celebrity face swaps signals an incoming era of personalized media, which could have a profoundly transformative effect on culture.

Mogylnyi’s hope is that wide access to synthesized media tools will increase humanity’s empathy and creativity — providing those who engage with the tech limitless chances to (auto)vicariously experience things they maybe otherwise couldn’t ever (or haven’t yet) — and so imagine themselves into new possibilities and lifestyles.

He reckons the tech will also open up opportunities for richly personalized content communities to grow up around stars and influencers — extending how their fans can interact with them.

“Right now the way influencers exist is only one way; they’re just giving their audience the content. In my understanding in our case we’ll let influencers have the possibility to give their audience access to the content and to feel themselves in it. It’s one of the really cool things we’re working on — so it will be a part of the platform,” he says.

“What’s interesting about new-gen social networks [like TikTok] is that people can both be like consumers and providers at the same time… So in our case people will also be able to be providers and consumers but on the next level because they will have the technology to allow themselves to feel themselves in the content.”

“I used to play basketball in school years but I had an injury and I was dreaming about a pro career but I had to stop playing really hard. I’ll never know how my life would have gone if I was a pro basketball player so I have to be a startup entrepreneur right now instead… So in the case with our platform I actually will have a chance to see how my pro basketball career would look like. Feel myself in the content and live this life,” he adds.

This vision is really the mirror opposite of the concerns that are typically attached to deepfakes, around the risk of people being taken in, tricked, shamed or otherwise manipulated by intentionally false imagery.

So it’s noteworthy that Reface is not letting users loose on its technology in a way that could risk an outpouring of problem content. For example, you can’t yet upload your own video to make into a deepfake — although the ability to do so is coming. For now, you have to pick from a selection of preloaded celebrity clips and GIFs which no one would mistake for the real deal.

That’s a very deliberate decision, with Mogylnyi emphasizing they want to be responsible in how they bring the tech to market.

User-generated video and a lot more — full body swaps are touted for next year — are coming, though. But before they turn on more powerful content generation functionality, the team is building counter tech to reliably detect such generated content. Mogylnyi says it will only open up usage once they’re confident of being able to spot their own fakes.

“It will be this autumn, actually,” he says of launching UGC video (plus the deepfake detection capability). “We’ll launch it with our Face Studio… which will be a tool for content creators, for small studios, for small post production studios, maybe some music video makers.”

“We also have five different technologies in our pipeline which we’ll show in the upcoming half a year,” he adds. “There are also other technologies and features based on current tech [stack] that we’ll be launching… We’ll allow users to swap faces in pictures with the new stack and also a couple of mechanics based on face swapping as well, and also separate technologies as well we’re aiming to put into the app.”

He says higher quality video swapping is another focus, alongside building out more technologies for post production studios. “Face Studio will be like an overall tool for people who want full access to our technologies,” he notes, saying the pro tool will launch later this year.

The Ukrainian team behind the app has been honing its deep tech chops for years — its members started working together back in 2011, straight out of university, and went on to set up a machine learning dev shop in 2013.

Work with post production studios followed, as they were asked to build face swapping technology to help budget-strapped film production studios do more while moving their actors around less.

By 2018, with plenty of expertise under their belts, they saw the potential for making deepfake technology more accessible and user friendly — launching the GIF version of the app late last year, and going on to add video this summer, when they also rebranded the app to Reface. The rest looks like it could be viral face swapping tech history…

So where does all this digital shapeshifting end up? “In our dreams and in our vision we see the app as a personalization platform where people will be able to live different lives during their one lifetime. So everyone can be anyone,” says Mogylnyi. “What’s the overall problem right now? People are scrolling content, not looking deep into it. And when I see people just using our app they always try to look inside — to look deeply into the picture. And that’s what really inspires us. So we understand that we can take the way people are browsing and the way they are consuming content to the next level.”

Read More

Posted on

Conversational analytics are about to change customer experiences forever

Companies have long relied on web analytics data like click rates, page views and session lengths to gain customer behavior insights. This method looks at how customers react to what is presented to them, reactions driven by design and copy. But traditional web analytics fail to capture customers’ desires accurately. While marketers are pushing into predictive analytics, what about the way companies foster the broader customer experience (CX)?

Leaders are increasingly adopting conversational analytics, a new paradigm for CX data. No longer will the emphasis be on how users react to what is presented to them, but rather what “intent” they convey through natural language. Companies able to capture intent data through conversational interfaces can be proactive in customer interactions, deliver hyper-personalized experiences, and position themselves more optimally in the marketplace.

Direct customer experiences based on customer disposition

Conversational AI, which powers these interfaces and automation systems and feeds data into conversational analytics engines, is a market predicted to grow from $4.2 billion in 2019 to $15.7 billion in 2024. As companies “conversationalize” their brands and open up new interfaces to customers, AI can inform CX decisions not only in how customer journeys are architected, such as curated buying experiences and paths to purchase, but also in how to evolve overall product and service offerings. This insights edge could become a game changer and a competitive advantage for early adopters.

Today, there is wide variation in the degree of sophistication between conversational solutions from elementary, single-task chatbots to secure, user-centric, scalable AI. To unlock meaningful conversational analytics, companies need to ensure that they have deployed a few critical ingredients beyond the basics of parsing customer intent with natural language understanding (NLU).

While intent data is valuable, companies will up-level their engagements by collecting sentiment and tone data, including via emoji analysis. Such data can enable automation to adapt to a customer’s disposition, so if anger is detected regarding a bill that is overdue, a fast path to resolution can be provided. If a customer expresses joy after a product purchase, AI can respond with an upsell offer and collect more acute and actionable feedback for future customer journeys.
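As a concrete illustration of disposition-aware routing, here is a minimal, hypothetical sketch: a sentiment score (here a stubbed keyword function standing in for a real NLU or sentiment service) steers the conversation toward escalation, an upsell offer, or the default flow. The function names, thresholds and word lists are invented for illustration.

```python
def get_sentiment(message: str) -> float:
    """Stub for an NLU/sentiment service: returns -1.0 (angry) to 1.0 (delighted)."""
    negative = {"overdue", "angry", "ridiculous", "cancel"}
    positive = {"love", "great", "thanks", "awesome"}
    words = set(message.lower().split())
    return (len(words & positive) - len(words & negative)) / max(len(words), 1)

def route_conversation(message: str) -> str:
    score = get_sentiment(message)
    if score < -0.05:
        # Anger detected (e.g. about an overdue bill): fast path to a human or resolution flow
        return "escalate_to_resolution"
    if score > 0.05:
        # Joy detected after a purchase: good moment for an upsell or feedback request
        return "offer_upsell_and_feedback"
    return "continue_standard_flow"

print(route_conversation("this overdue bill is ridiculous"))  # escalate_to_resolution
print(route_conversation("thanks I love the new plan"))       # offer_upsell_and_feedback
```

In a production system the stubbed scorer would be replaced by whatever sentiment, tone or emoji analysis the conversational platform provides; the routing logic is the part that turns disposition data into a different customer experience.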

Tap into a multitude of conversational data points

Read More

Posted on

The essential revenue software stack

From working with our 90+ portfolio companies and their customers, as well as from frequent conversations with enterprise leaders, we have observed a set of software services emerge and evolve to become best practice for revenue teams. This set of services — call it the “revenue stack” — is used by sales, marketing and growth teams to identify and manage their prospects and revenue.

The evolution of this revenue stack started long before anyone had ever heard the word coronavirus, but now the stakes are even higher as the pandemic has accelerated this evolution into a race. Revenue teams across the country have been forced to change their tactics and tools in the blink of an eye in order to adapt to this new normal — one in which they needed to learn how to sell in not only an all-digital world but also an all-remote one where teams are dispersed more than ever before. The modern “remote-virtual-digital”-enabled revenue team has a new urgency for modern technology that equips them to be just as — and perhaps even more — productive than their pre-coronavirus baseline. We have seen a core combination of solutions emerge as best-in-class to help these virtual teams be most successful. Winners are being made by the directors of revenue operations, VPs of revenue operations, and chief revenue officers (CROs) who are fast adopters of what we like to call the essential revenue software stack.

In this stack, we see four necessary core capabilities, all critically interconnected. The four core capabilities are:

  1. Revenue enablement.
  2. Sales engagement.
  3. Conversational intelligence.
  4. Revenue operations.

These capabilities run on top of three foundational technologies that most growth-oriented companies already use — agreement management, CRM and communications. We will dive into these core capabilities, the emerging leaders in each and provide general guidance on how to get started.

Revenue enablement

Read More