The Robots Are Coming For Your Wardrobe

The husband and wife duo behind Chennai-born AI startup Mad Street Den are determined to change the way you shop for clothes.

[Photo: Flickr user Jack Wallsten]

Artificial intelligence is rapidly changing just about every industry, with billions being poured into algorithms for everything from self-driving cars to detecting cancer to chatbots. Not so much fashion.


Mad Street Den, a three-year-old artificial intelligence startup founded by a husband and wife team in Chennai, India, is boldly going where few startups have gone before. The couple’s complementary career paths have helped. CEO Ashwini Asokan was previously with Intel Labs working on product design; her husband and CTO, Anand Chandrasekaran, is a neuroscientist who switched over to AI when he realized the technology wasn’t quite there yet to build silicon brains.

After living and working in the U.S. for years, they returned to their country of birth in 2014 to build a startup, tooling around with a few different ideas before landing on fashion. Since then, they’ve developed over a dozen AI-assisted tools for online retailers, many of them using computer vision algorithms that can suggest pieces to finish an outfit or match a customer’s tastes. The company says that in an internal study, online shoppers spent 72 minutes on websites where its software was deployed, compared to 25 minutes on sites without it.

Ashwini Asokan


Among the clients they’ve racked up are a mix of internet marketplaces, brands, and big box stores. Their international clients include Villoid, a fashion app cofounded by British model Alexa Chung; HipVan, a Singaporean decor and furniture brand; and Indian online platforms Tata CLiQ and The LabelLife. In the U.S., the company works with Thredup, a San Francisco-based fashion resale site; Shoprunner; and an iconic denim brand that they won’t name. (The company’s own mysterious name is a hodgepodge of high-tech and human, Asokan explains: Mad stands for “mind able devices,” den describes what their first workspace was like, and street comes from a night when one too many drinks were had.)

Investors are also taking notice. Mad Street Den recently raised an undisclosed Series A round from Sequoia Capital India, along with another infusion from existing investors Exfinity Ventures and growX Ventures. From launching with just one other cofounder and employee, neuroscientist Costa Colbert, the company has grown to 45 employees, with headquarters in San Francisco and offices in London and Chennai.


Fast Company: So what exactly does your software do for fashion?

Ashwini Asokan: The first one that we’ve made some fantastic progress on is the suite called Vue Commerce. It’s basically an end-to-end, AI-assisted set of onsite or on-app products. People are browsing a site or looking around an app, and we’re basically trying to understand what they are looking for, starting with basics like color, pattern, neckline, sleeve length, and style. We’re trying to look at user behavior, history, and their cohort.

We’re looking at it at a completely meta level. Traditionally, people have looked at this as big data, but we’re actually looking at the visual aspects of every little thing the user is doing. At the end of the day, fashion is super visceral. It’s super visual. [But] we’ve pretty much stripped that entire experience out of our apps and our sites. We’re missing that tangible, almost emotional experience you have when you go and try something on in a store, when you actually touch and feel something. A big question that we were asking ourselves is, how can we use computer vision to change that experience online?

FC: What does that look like, specifically?


Anand Chandrasekaran: We have what’s called dynamic personalization, and that becomes relevant when a site has 200,000 dresses. Which of 200,000 should I show you? We can’t pick up all the cues immediately obviously. But in time we can show you 200 of the 200,000 dresses that are relevant to you. And it’s our job and the job of our AI to start picking up those kinds of things.

Anand Chandrasekaran

AA: So if you’re looking at, let’s say, a pink polka dot A-line dress. Chances are you like the shape of A-line dresses and like pink polka dots. But people like different types of pink. Maybe you like baby pink and I like fuchsia—”pink” is not pink. I might start with that polka dotted dress but I might go down a very different path.

Let’s say I’m only interested in A-line. And so I’m willing to go around very specific kinds of styles, while you on the other hand are only looking for specific types of pink. Both of us end up having entirely different experiences on that site. And whether it’s product page recommendations or ensemble creations, we do personalized recommendations with every single click that you make on a given app or a site.
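The attribute-level matching the two describe can be sketched as a toy weighted-attribute recommender. Everything here is hypothetical (the catalog, item names, and scoring are illustrative stand-ins, not Mad Street Den’s actual system); the idea is simply that each click reinforces the clicked item’s visual attributes, and items sharing those attributes rise in the ranking:

```python
from collections import Counter

# Toy catalog: each item tagged with visual attributes a computer-vision
# model might extract (color, silhouette, pattern).
CATALOG = {
    "dress_1": {"color": "fuchsia", "shape": "a-line", "pattern": "polka dot"},
    "dress_2": {"color": "baby pink", "shape": "a-line", "pattern": "solid"},
    "dress_3": {"color": "fuchsia", "shape": "sheath", "pattern": "floral"},
    "dress_4": {"color": "navy", "shape": "a-line", "pattern": "polka dot"},
}

def update_profile(profile: Counter, item_id: str) -> None:
    """Each click reinforces the clicked item's attribute values."""
    for value in CATALOG[item_id].values():
        profile[value] += 1

def recommend(profile: Counter, k: int = 2) -> list[str]:
    """Rank items by how strongly their attributes match the profile."""
    def score(item_id: str) -> int:
        return sum(profile[v] for v in CATALOG[item_id].values())
    return sorted(CATALOG, key=score, reverse=True)[:k]

profile = Counter()
update_profile(profile, "dress_1")  # user clicks the fuchsia polka-dot A-line
print(recommend(profile))           # → ['dress_1', 'dress_4']
```

Because the user who cares only about A-line shapes and the user who cares only about a specific pink accumulate different attribute counts, the same starting click sends them down different paths, as Asokan describes.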

FC: Do you think that the “if you like that, you’ll like this” approach could inadvertently keep us in our fashion bubbles, the way our algorithmically designed news feeds keep us in personalized filter bubbles and echo chambers?

AA: If you are always going to show me something I’m expecting to see, at some point I’m going to get really bored and move on. The question we ask ourselves is, how do we inject randomness at regular intervals?


On the other hand, it’s all about surprise. How do you surprise someone without pissing them off? It’s wonderful to show someone florals, but if you do it all the time, it becomes really boring. But what happens when you start to understand what florals are all about? It’s about nature—or maybe you like the texture it usually appears on—we all have our own preferences.

We do a lot of work to understand the intent of each and every user. Is this someone who is trying to look at breezy, airy, earthy prints? You can slowly start expanding into adjacent materials and patterns. You’re looking at theme and intent; you’re not looking at it very literally, like, ‘I’m going to go get you more florals.’ We are looking at cohorts, too: people like you. Then you inject more variance into the journey.
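One simple way to “inject randomness at regular intervals” is an epsilon-greedy-style shuffle: mostly serve the top-ranked items, but occasionally swap in something from an adjacent theme. This is a generic sketch of that idea, not the company’s method; the item names and the 0.2 rate are invented for illustration:

```python
import random

def recommend_with_surprise(ranked: list[str], adjacent: list[str],
                            epsilon: float = 0.2, rng=None) -> list[str]:
    """With probability epsilon per slot, replace the ranked item with
    one drawn from an 'adjacent' pool of related-but-unclicked themes."""
    rng = rng or random.Random()
    out, pool = [], list(adjacent)
    for item in ranked:
        if pool and rng.random() < epsilon:
            out.append(pool.pop(rng.randrange(len(pool))))  # surprise slot
        else:
            out.append(item)                                # expected slot
    return out

# Seeded generator so the example is repeatable.
feed = recommend_with_surprise(
    ["floral_1", "floral_2", "floral_3"],
    ["botanical_print_1", "leaf_texture_1"],
    rng=random.Random(42),
)
print(feed)  # mostly florals, with an occasional adjacent-theme swap
```

Tuning epsilon is exactly the tension Asokan names: too low and the feed gets boring, too high and the surprises start to annoy.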



FC: Why did you choose to launch your company in Chennai versus, say, New York?

AA: I had gone up and spoken to the who’s who in computer vision in Silicon Valley. It became very evident they were very jaded by computer vision. They kept saying “Oh no, we’ve seen this stuff, like 10 or 20 years ago. This stuff never goes anywhere.” There wasn’t anything that people were like, “oh my god” about, which was a sign to us.

We want to experiment through the industry and through the market with partners as opposed to just building science projects inside our labs. Moving to India opened up the Asian market for us in ways that I had never imagined. 

In Asia everyone’s door was open. Every other retailer or e-commerce site was like, “come on let’s do this together.” So there was some extensive work that we did in the early days with the retailers in Asia, which pretty much allowed us to develop our products for that industry.


A couple of months ago, we closed our Series A with Sequoia and I think the reason we got there was because we started in Asia. Now of course we’re pretty much completely in the U.S. But Asia totally leapfrogged the U.S. They were at least two and a half years ahead of anyone else, I would say.

FC: I know Gilt and other fashion groups are doing interesting work in AI, but there don’t seem to be a lot of startups around fashion and AI. I’m curious whether you think this is a blind spot for more typical tech companies, whose founders tend to be young men in the Bay Area who don’t think to innovate in fashion.

AA: Yeah. You know, the lack of bias does help.

FC: I was trying to ask about that as nicely as possible. How does diversity play in?

AA: We talk about this all the time. I am very vocal when it comes to the world of gender and I do a lot of work in that space. I am also interested in the trajectory of technology, the intersection of people and quality products and looking at it from a perspective of design. If you look at the industries that are the fastest adopters of technology, they’re usually porn, sports, and fashion, and actually religion.


Retail was the last of our experiments, by the way. We’ve done stuff in gaming, we’ve done stuff in analytics. We’ve done quite a few of our experiments in the space of facial recognition and motion expression recognition. We’ve built up quite a few models. Finally, the models that we ended up doing actually didn’t use a lot of those core technology pieces.

We gave up on gaming because we realized that the industry was not ready. The same thing with analytics, the same thing with so many. We were thankfully very connected when we came to India. We’ve known all these VCs for a while and people kept giving us introductions, and it didn’t matter. These industries were just not ready for this. And retail on the other hand opened up and swallowed us whole. They still can’t seem to get enough of us.

I would say in the last 12 to 18 months there are a lot of people that have cropped up in this space. Like way more than I have seen in the past, but for us we’re not just looking at fashion. We’re now moving into furniture. We’re moving into all things retail, including offline. We’re looking at retail as the entire value chain and operational stuff. You need to have an open mind and to know that this is one of the industries that’s adopted technology way quicker than most others have. And part of it is not having that bias.

FC: I’m sure you are crazy busy but you blog a lot and give talks about AI, a lot of it trying to dispel the Terminator narrative that comes up a lot. Why do you think it’s important to be educating the wider public about AI?

AA: To be honest, at the end of the day nobody cares about a lot of this stuff; people are going to go ahead and do what they’re going to do. All this AI-madness-killer-robot talk is a press-generated thing. People in their day-to-day are like, ‘whatever.’ Sure, they care about their privacy, or maybe a few things here and there. But I think things change when people suddenly see something that changes their life, that brings something of super value to them. Or when you show a company that all of this reduces 30% of their operations costs. That is magic.


We believe that the magic should be in the products and in what happens when industries look at our products. But education has to be done in very layman, laywoman terms, completely demystifying the topic and letting people know exactly what it is that you’re doing.

AC: You know, we already have experience building intelligence. It’s our children and they are pretty useless when they are born. They have to go through a lot of training. They have to be tempered by society before they are of any use to mankind in general. So how is that going to be any different from any other intelligence that we build? And that’s kind of the space that we are operating in. Saying that we can build intelligence is one thing; making that intelligence useful or even worthy of being spoken about in the terms of our potential successors or partners takes a lot of work.

FC: What’s in store for the company and AI in general, as you look ahead?

AC: Let me dodge that question. I’m not going to make the mistake again of thinking that AI can be built in isolation. It will be done the way the market decides. So we will continue building useful products.

I’ve actually said that the future of this is going to be in hardware. There is no way we can build AI systems at scale without solving the fundamental issue of how much power we consume to run these tasks. Things will go to the next level when we can start putting some of that intelligence on a chip small enough to go in a phone. You’ll be putting a lot more intelligence out there, and that will further enable us to bring intelligence to a completely different level.


There are people out there building deep learning networks that are very good, but they are very computationally heavy. If you talk with any deep learning practitioners, they’ll say a lot of the work is converting the data, whether that’s audio or images, into signals that are more abstract, and coming up with meaningful inferences. By moving computation into neuromorphic chips, you can move that processing very close to the data it’s working with. If you put it in a phone, you can work with all the photos and videos on your phone. It’s not that you’re moving the whole technology to the phone, but you’re going to be able to do a lot of the processing on your phone before streaming it to the cloud for larger brains to tackle.
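The edge-to-cloud split Chandrasekaran sketches can be illustrated with a toy “featurizer”: compress raw pixels into a small abstract vector on the device and ship only that upstream. This is purely a didactic stand-in (average pooling instead of a real neural or neuromorphic feature extractor):

```python
def on_device_featurize(image: list[list[int]], block: int = 8) -> list[float]:
    """Cheap stand-in for an on-chip feature extractor: average-pool the
    image into block x block regions, producing a small vector that could
    be streamed to the cloud instead of the raw pixels."""
    h, w = len(image), len(image[0])
    features = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            patch = [image[y][x]
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            features.append(sum(patch) / len(patch))
    return features

raw = [[128] * 64 for _ in range(64)]          # a fake 64x64 grayscale photo
vec = on_device_featurize(raw)
print(len(raw) * len(raw[0]), "->", len(vec))  # 4096 pixels -> 64 numbers
```

The 64x reduction here is the point: the phone does the heavy, power-sensitive conversion from raw data to abstract signal, and only the compact result travels to the “larger brains” in the cloud.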