Per Unit Economics: Dogs Per Night

Last week, I read an article about Rover.com, a dog-sitting marketplace, receiving a third round of funding of $12MM, bringing their total funding to date to $25MM.

I didn’t give this much thought until later that night, when I started to think a bit more about the per unit economics of such a business. The article talked about their growth from 10,000 sitters to 25,000 sitters in 2013, their eightfold increase in revenue between 2012 and 2013, and how sitters charge between $20 and $40 per night per dog, with Rover taking a 15% cut.

When you look at a marketplace like that, Rover has quite a lot of levers it can pull:

  • Growth
    • Grow the number of sitters
    • Grow the number of dog owners
    • Increase the number of active users
    • Expand into other pet categories
  • Revenue Optimization
    • Charge more than $20 – $40
    • Take more than a 15% cut

At this stage of the company, and with $25MM in funding, presumably most of their focus is on growing the service in their core category. So, what might that look like?

2012 Rover Revenue

Well, if 1% of Rover’s 10,000 sitters in 2012 were active on any given night, and they each sat one dog, Rover would have earned between $110,000 and $220,000 on an annualized basis. If we swag Rover’s run rate at $7MM annually*, then with 40% – 60% of those sitters active, they’d have 4,000 – 6,000 dogs per night and Rover would hit break-even, depending on whether sitters are charging closer to $20 or $40 per night.
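For the curious, here’s that back-of-the-envelope math as a quick Python sketch (every input is one of the assumptions above, not a reported figure):

# Rover per-unit economics swag; all inputs are assumptions, not reported figures
rate_low, rate_high = 20.0, 40.0  # $ per dog per night
rover_cut = 0.15                  # Rover's 15% take

# 2012: 1% of 10,000 sitters active on a given night, one dog each
dogs_per_night = 10000 * 0.01
print "Annualized revenue: $%.0f to $%.0f" % (
    dogs_per_night * rate_low * rover_cut * 365,
    dogs_per_night * rate_high * rover_cut * 365)  # ~$110K to $220K

# dogs per night needed to cover a swagged $7MM annual run rate
run_rate = 7e6
for rate in (rate_low, rate_high):
    print "Break-even dogs/night at $%.0f: %.0f" % (rate, run_rate / 365 / (rate * rover_cut))
# roughly 6,400 at $20 and 3,200 at $40, in the ballpark of the 4,000 to 6,000 above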

So, as Rover grows throughout 2014 and beyond, one metric I expect they’re paying attention to is dogs per night. At what point does the business look primed for an exit? My guess is somewhere around the 15,000 dogs per night mark. With an estimated 153,000 dogs in Seattle alone and 47% of households nationwide owning at least one dog, 15,000 dogs per night nationwide seems plausible, doesn’t it?

————

*So, what’s Rover’s run rate? Short answer: I don’t know. Speculative answer? Well, if we take Rover’s 43 employees and assume that, fully burdened, they cost an average of $100K per annum (though perhaps it’s more – it’s Seattle, after all), then that puts Rover’s payroll at ~$4.3MM. Then we have to add in other operating costs and, of course, the cost of acquiring sitters and pet owners. All told, I’m guessing their run rate is between $6MM – $7MM, which would mean this latest round of funding gave them another couple of years to work towards an exit…

IPython Notebooks And PuLP

Lately, we’ve begun working on various constrained optimization problems, and this was a good opportunity to use Python and, more specifically, IPython Notebooks, which I’ve been learning about in recent days.

Assuming you’ve got IPython already working, try typing:

ipython notebook

If it works, great. If not, you may have some dependent Python libraries to install, e.g.

pip install pyzmq
pip install tornado

Once you’ve got the dependent libraries installed, typing ipython notebook should bring up something like the screenshot below in your browser:

[Screenshot: the IPython Notebook dashboard]

OK, with that out of the way, I’ve been using a library called PuLP (pip install pulp) to test out various optimization problems, and so far so good. There’s also a good introduction to PuLP with examples that you can follow. I recreated those examples in a notebook, and to show them in this blog post, I had to first convert the notebook to HTML format using nbconvert (you may need to pip install pygments to get it working):

ipython nbconvert tutorial.ipynb

I then had the option of hosting the HTML page somewhere (WordPress doesn’t seem to like it), or simply using nbviewer, which is what I did. In that case, you just pass in the URL of your .ipynb file and it creates a viewing-friendly version of it.

Alternatively, you can just use WordPress’ code block functionality and paste in your code:

# Whiskas optimization problem
import pulp

# initialise the model
whiskas_model = pulp.LpProblem('The Whiskas Problem', pulp.LpMinimize)
# make a list of ingredients
ingredients = ['chicken', 'beef', 'mutton', 'rice', 'wheat', 'gel']
# create a dictionary of pulp variables with keys from ingredients
# (the default lower bound is -inf, so pin it to 0)
x = pulp.LpVariable.dict('x_%s', ingredients, lowBound=0)

# cost data
cost = dict(zip(ingredients, [0.013, 0.008, 0.010, 0.002, 0.005, 0.001]))
# create the objective
whiskas_model += sum( [cost[i] * x[i] for i in ingredients])

# ingredient parameters
protein = dict(zip(ingredients, [0.100, 0.200, 0.150, 0.000, 0.040, 0.000]))
fat = dict(zip(ingredients, [0.080, 0.100, 0.110, 0.010, 0.010, 0.000]))
fibre = dict(zip(ingredients, [0.001, 0.005, 0.003, 0.100, 0.150, 0.000]))
salt = dict(zip(ingredients, [0.002, 0.005, 0.007, 0.002, 0.008, 0.000]))
# note these are constraints and not an objective, as there is an equality/inequality
whiskas_model += sum([protein[i]*x[i] for i in ingredients]) >= 8.0
whiskas_model += sum([fat[i]*x[i] for i in ingredients]) >= 6.0
whiskas_model += sum([fibre[i]*x[i] for i in ingredients]) <= 2.0
whiskas_model += sum([salt[i]*x[i] for i in ingredients]) <= 0.4

#problem is then solved with the default solver
whiskas_model.solve()
#print the result
for ingredient in ingredients:
	print 'The mass of %s is %s grams per can'%(ingredient, x[ingredient].value())
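One small addition worth making before trusting those printed masses: check that the solver actually found an optimum. The same status lookup shows up in the next example too:

# check the solver status before trusting the results
print 'Status:', pulp.LpStatus[whiskas_model.status]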

The next one is called the beer distribution problem. And no, drinking them all is not the answer…
 

# The Beer Distribution Problem for the PuLP Modeller
# Import PuLP modeler functions
import pulp

# Creates a list of all the supply nodes
warehouses = ["A", "B"]
# Creates a dictionary for the number of units of supply for each supply node
supply = {"A": 1000,
          "B": 4000}
# Creates a list of all demand nodes
bars = ["1", "2", "3", "4", "5"]
# Creates a dictionary for the number of units of demand for each demand node
demand = {"1": 500,
          "2": 900,
          "3": 1800,
          "4": 200,
          "5": 700}
# Creates a matrix of costs for each transportation path (warehouse -> bar)
costs = [  # Bars:  1  2  3  4  5
           [2, 4, 5, 2, 1],  # Warehouse A
           [3, 1, 3, 2, 3]]  # Warehouse B
# The cost data is made into a dictionary keyed as costs[warehouse][bar]
costs = pulp.makeDict([warehouses, bars], costs, 0)
# Creates the 'prob' variable to contain the problem data
prob = pulp.LpProblem("Beer Distribution Problem", pulp.LpMinimize)
# Creates a list of tuples containing all the possible routes for transport
routes = [(w,b) for w in warehouses for b in bars]
# A dictionary called x is created to contain quantity shipped on the routes
x = pulp.LpVariable.dicts("route", (warehouses, bars), lowBound=0, cat=pulp.LpInteger)
# The objective function is added to 'prob' first
prob += sum([x[w][b]*costs[w][b] for (w,b) in routes]), \
	"Sum_of_Transporting_Costs"
# Supply maximum constraints are added to prob for each supply node (warehouse)
for w in warehouses:
	prob += sum([x[w][b] for b in bars]) <= supply[w], \
		"Sum_of_Products_out_of_Warehouse_%s"%w
# Demand minimum constraints are added to prob for each demand node (bar)
for b in bars:
	prob += sum([x[w][b] for w in warehouses]) >= demand[b], \
		"Sum_of_Products_into_Bar%s"%b
# The problem data is written to an .lp file
prob.writeLP("BeerDistributionProblem.lp")
# The problem is solved using PuLP's choice of Solver
prob.solve()
# The status of the solution is printed to the screen
print "Status:", pulp.LpStatus[prob.status]
# Each of the variables is printed with its resolved optimum value
for v in prob.variables():
	print v.name, "=", v.varValue
# The optimised objective function value is printed to the screen
print "Total Cost of Transportation = ", prob.objective.value()
Running the above produces:

Status: Optimal
route_A_1 = 300.0
route_A_2 = 0.0
route_A_3 = 0.0
route_A_4 = 0.0
route_A_5 = 700.0
route_B_1 = 200.0
route_B_2 = 900.0
route_B_3 = 1800.0
route_B_4 = 200.0
route_B_5 = 0.0
Total Cost of Transportation =  8600.0

RampUp 2014 Recap

Babbage Difference Engine

Yesterday, I had the opportunity to attend LiveRamp’s ad-tech conference, held at the Computer History Museum in Mountain View. Below is a recap of the sessions I attended.

Opening Keynote

Google’s $100M man, Neal Mohan, kicked off the day by talking about:

  • Multi-device users and the implications that has for ad formats, ad measurement and attribution.
  • Incorporating user choice into the ads, citing YouTube’s TrueView feature in particular, where users have choice over which ads to watch, and advertisers only pay when the ad is actually viewed.
  • Combating ad fraud both on the inventory side and on the buy side.
  • A focus on brand measurement. This one’s funny given that search is entirely a performance-based advertising medium. However, there’s a growing argument being made in Silicon Valley that brands should be measuring their display campaigns not through a direct response lens but instead through a traditional TV advertising lens of things such as brand recall and awareness. It reminded me of Instagram adopting the same position last year. This is in contrast with Pat Connolly, the CMO of Williams-Sonoma, who also spoke on a panel and asserted that they’re entirely a performance marketer. Will brands actually buy into traditional media metrics for their digital spend? I think that remains an open question.

The one thing that Neal stressed time and again was the focus on trying to do what makes sense for the users. An example was when someone in the audience asked about injecting display ads into messaging apps, to which he gave a measured reply that they’d only consider doing something like that if there was a logical context for doing so.

Convergence of Offline and Online Data

This panel featured Rick Erwin of Experian, Scott Howe of Acxiom, and Dave Jakubowski of Neustar. The main theme that emerged from this conversation was what was referred to as entity resolution: cross-device identification, multi-source 1st party customer data such as sales and customer service, and 3rd party data appends.

Regarding advertising on a particular channel, one of the panelists made the point that brands need to be thinking about owning the experience versus just owning the moment. A good reminder to not think in terms of email, mobile, desktop, etc., but instead think about the customer’s use case. I’ve seen this a lot: various vendors will help with one use case on one channel, and the result is that the customer receives a disjointed experience when interacting across a variety of channels (and a variety of vendors).

TV advertising also cropped up, particularly around addressable TV. Back when I worked on in-store TV networks around 2009, this was something people were beginning to explore, and while it’s still early, it’s almost certainly just a matter of time before digital and TV campaigns are targeted to individual users. This also bleeds into dynamic creative, which was something of a recurring theme. As more ad inventory becomes programmatic, it stands to reason that TV will eventually follow suit, both in terms of RTB and dynamic creative.

Measurement also came up, particularly in terms of digital advertising’s effect on in-store sales. This was something we did at DS-IQ circa 2010, and it’s strange to hear companies only now starting to develop scalable solutions in this area.

How Top Brands Use Data Onboarding Today

This panel featured Brandon Bethea from Adaptive Audience, Nikhil Raj from Walmart Labs, and Tony Zito from Rakuten MediaForge. A couple of things stood out in this talk. The first was the depth of Walmart’s planning with their CPG suppliers. Nikhil naturally didn’t offer much detail, but one could make a reasonable assumption that there’s a large amount of data sharing that takes place between the retailer and the brands. This has all kinds of advantages in terms of developing an understanding of the customer, and in terms of improving marketing outcomes for both brand and retailer marketing campaigns. It’s not clear if there’s a formal data exchange platform that’s common between Walmart and the brands, but that’d certainly make a lot of sense.

The other thing that was discussed was the notion of lookalike modeling and also ad suppression. Again, nothing new, but simply a reflection of what was on their mind.

Data-Driven Retail Marketing Strategies

Panelists were Benny Arbel of myThings, Ryan Bonifacino of Alex & Ani, and Jared Montblanc of Nokia. Of note were some of the interesting things that Alex & Ani is doing around re-targeting campaigns and custom audience campaigns on Facebook through Kenshoo. Ryan cited Facebook as having been a good vehicle for new customer acquisition.

Jared of Nokia discussed how they evaluate their digital spend through the lens of Cost per High Quality Engagement. This makes sense in his world where Nokia is selling their devices through carrier partners. So, when they run a campaign, did a user not just click on a video but actually watch it, for example?

From the CMO: The Future of Data In Marketing

This was one of the highlights for me, where Pat Connolly of Williams-Sonoma talked with Kirthi Kalyanam of Santa Clara University.

Observation 1: Connolly is one of those self-effacing, humble execs who could easily be dismissed as old school based on appearances, and you’d be drawing the completely incorrect conclusion. The guy has been with Williams-Sonoma for 35 years, going back to when they were strictly a catalog retailer; he’s demonstrably smart and has obvious command of some pretty technical details. For example, how many CMOs have you heard comfortably discuss technologies such as Hadoop, Teradata, and Aster in one breath and then discuss hazard modeling in the context of attribution in the next? To my knowledge, the only vendor doing survival analysis at scale is DataSong, and for Connolly to be in the weeds there was impressive.

Speaking of being in the weeds, Williams-Sonoma has a monthly marketing investment meeting, attended by the CEO, where junior analysts present the details of various marketing campaigns. Talk about alignment – between direct (.com), marketing, and merch. Impressive.

Some other nuggets:

  • They can identify 50% – 60% of all web visitors, and aim to serve up recs in under 40ms. That Connolly can recite the 40ms SLA made me smile, particularly since this is something we live and breathe in the Data Lab.
  • They do about $2B in eCommerce and believe they’re the most profitable ecommerce retailer in the country.
  • There are ~100 variables in their regression models, but just one variable has 70% of the predictive value.
  • With a simple A/B test of making their Add to Cart button bigger, they added $20MM in incremental demand.
  • They can currently identify 30% of users across devices with a goal of 60% by the end of the year.
  • They consider tablet as their core site experience with desktop being simply a bigger version of tablet.
  • They consider their competitive advantage to be org alignment between merch, marketing, and direct. I wouldn’t disagree, knowing how difficult this can be.
  • They allow ad cost for acquiring new customers to be higher. It was a good example of enlightened decision-making where they aren’t simply trying to maximize ROAS on every single digital campaign.
  • While their paid marketing is entirely performance-driven, their owned media such as their blog is allowed to be more brand focused. Pat cited their West Elm brand which rarely features an actual product.

Measuring Digital Marketing’s Impact On In-Store Sales

Michael Feldman of Google, Gad Alon of Adometry, Kirthi Kalyanam of Santa Clara U, and Ben Whitmer of StageStores were featured panelists.

Gad mentioned that 40% of in-store demand is driven by digital media. Of that 40%, 70% could be attributed to display. I couldn’t find any data on the web to support this claim, and would be interested in hearing from others regarding this.

In-store measurement came up briefly, but I was surprised this wasn’t a bigger topic at the conference. Specifically, I’m talking about the kinds of things RetailNext, Nomi, and a host of others do.

Lastly, Peter Thiel was on deck for the closing talk. A summary of his discussion can be found here. There were a couple of salient points he made that I’ve been thinking about since. Will perhaps write more when I’m done digesting :-)

Final point – the overall quality of the sessions was by and large very good. It’s clear that there’s a ton of investment in this space and while some might argue strongly that there’s not a bubble, I’d at least say that were I starting a company right now, there’s no way it’d be in ad-tech – just too crowded and fragmented to be excited about.

Final final point – nice job by LiveRamp putting on this one-day conference. Next year, it’d be a lot more powerful if the panelists reflected the audience a bit more, i.e., I’m guessing half of the audience was female, yet throughout the day, I saw just one female panelist. This is something that’s really got me motivated to do something about.

What Media Fragmentation Can Teach Retail

In a week where Facebook bought WhatsApp for $19B, many pundits latched on to the 450MM WhatsApp users that Facebook had just acquired, with particular emphasis given to WhatsApp’s penetration in international markets.

Realistically though, given Facebook’s penetration, this is not about acquiring unique users but simply about retaining and acquiring more of existing users’ attention. Ben Evans astutely noted this in his post, and there’s also a 2012 paper by Professor James Webster of Northwestern University and Thomas Ksiazek of Villanova University, entitled The Dynamics of Audience Fragmentation: Public Attention in an Age of Digital Media, that offers some insights on this.

In the paper, the authors took a network analysis approach to measure the degree to which audiences overlap between different media outlets. Among the top 236 media outlets in their study, almost all shared some audience with one another. Interestingly, the authors also note that in a world of expanding choice, users bias towards the highest-quality and most popular media content.
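As a toy illustration (the outlets and user IDs below are made up, not the paper’s data), the audience duplication the authors measure boils down to intersecting the sets of users who visit each pair of outlets:

# Toy audience-duplication measure; outlets and user IDs are hypothetical
audiences = {
    'outlet_a': set([1, 2, 3, 4, 5]),
    'outlet_b': set([4, 5, 6, 7]),
    'outlet_c': set([1, 7, 8]),
}
outlets = sorted(audiences)
for i, a in enumerate(outlets):
    for b in outlets[i + 1:]:
        shared = audiences[a] & audiences[b]
        # duplication: the share of a's audience that also visits b
        pct = 100.0 * len(shared) / len(audiences[a])
        print "%s & %s: %d shared users (%.0f%% of %s)" % (a, b, len(shared), pct, a)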

So, in a world of expanding retail choices, what can retail learn from media?

Competing For Attention

Richard Lanham, in his book, The Economics of Attention, said,

Assume that, in an information economy, the real scarce commodity will always be human attention and that attracting that attention will be the necessary precondition of social change. And the real source of wealth.

http://www.press.uchicago.edu/Misc/Chicago/468828.html

Just as Facebook and any other media property is competing for attention, so too are retailers. This suggests that while programmatic ad buying will of course continue to dominate marketing, there’s also a case to be made that retailers need to create more engaging digital experiences – stories that inform and entertain, a point of view…reasons deserving of attention.

Amazon offers their own unique approach to commanding customer attention by aiming to expand the number of products available from their storefront.

The Case For Differentiation

There are obvious parallels between retail and media, given the explosion of new eCommerce entrants in recent years. Additionally, this expansion of choice is likely to also materialize in the brick and mortar world if the likes of JC Penney, Sears, and Barnes & Noble continue their decline. With a glut of cheap commercial real estate becoming available in the near future, there’ll be yet more opportunities for new retail entrants (with some of those same eCommerce companies going multi-channel).

In an era where pricing parity is simply the cost of doing business, customers will seek out differentiated, high-quality retail experiences. To deal with this, existing large retailers will have to focus on providing a best-in-class retail experience in every single category in which they choose to compete, because if they don’t, it’s a guarantee that a Warby Parker equivalent will fill the void, offering differentiation in overlooked categories that retailers take for granted.

The Persistence Of Popularity

Just as customers will seek out differentiated experiences, there’s also a bias to go with already popular choices because a) it’s already part of most people’s retail repertoire, b) it offers a sufficiently high quality experience, and c) there’s a social aspect to doing things that others are doing that offers opportunity for conversation around common topics.

Large retailers that are already popular have an opportunity to remain popular, provided they can retain a high quality customer experience. Easier said than done as foot traffic in malls is declining. Will retailers have the staying power to invest in differentiated IRL experiences?

The Importance Of Personalization

Given a world of essentially unlimited choice, the task of guiding customers will take on significantly more importance. In media, Webster and Ksiazek call recommenders and search systems “user information regimes,” where much of the media a user consumes is the result of recommendations. In retail, we’re still in the infancy of personalization, with many retailers not offering much more than basic product recommendations. As retailers increasingly compete for attention, the marriage of storytelling and personalization offers some interesting opportunities to engage, inform, and inspire.

Pitching A Prospect

For the past year, I’ve been in the fortunate position where many companies – startups, consultants, and enterprise software companies alike – would like to do business with my employer. As such, I’ve had the chance to meet a lot of interesting people, and learn about lots of innovation in the so-called “Big Data” space.

However, for much of my career, I sat on the side of the software vendor, aiming to sell our solution to prospective customers. Now that I’m on the opposite side, I’m generally exposed to three types of pitches from prospective vendors:

  1. The easy mark
  2. The fishing expedition
  3. The professional

The first type of pitch is where the salesperson hasn’t done their homework, and they perceive the prospective customer as an easy mark. That is, their assumed starting point is that the prospective customer is uninformed, unsophisticated, and incapable of innovating. In these types of pitches, the salesperson asks next to no questions and is fixated only on getting through their deck and/or demo, which usually begins with a ridiculously simplified starting point and a not insignificant amount of arrogant hand-waving. They’re so into their pitch that they don’t take the time to read the room, ask questions, and adjust the tone and/or depth of content to suit the audience.

The second type of pitch is really not much of a pitch at all. In this scenario, the salesperson has (usually) a cursory understanding of the product they’re selling, and as soon as they encounter anything resembling an objection or a comment that deviates from their expected script, they embark on something that reminds you of the jilted partner who refuses to accept you’ve broken up with them. That is, the conversation turns into them trying too hard to be what they think you want or need. In these situations, it’s better to switch gears and use this as a valuable data point: acknowledge there appears to not be an opportunity, and instead ask relevant questions that can better prepare you for future prospect conversations.

The last type of pitch is, for lack of a better term, the professional. They know their product or service, they know your industry and company (and perhaps your area of responsibility), and most importantly, they’re confident enough in their read of the situation that sometimes the answer is no (and they’ll even initiate that realization), and that’s ok. They’re not about to waste their own time chasing a hopeless cause. If there’s an opportunity, then let’s talk some more. If not, let’s do each other a favor and acknowledge that fact.

The last type of salesperson is the one that usually gets folks’ respect, if not necessarily always the business. So, before you get ready to pitch your prospect, ask yourself: what kind of salesperson are you? Are you gearing up to treat your prospect as an easy mark? Are you about to walk in there on an undirected fishing expedition? Or are you prepared to do your homework ahead of time, bring forth an actual solution (as opposed to a piece of software), and actively listen to what your prospect is communicating?

An Industry Association for Social Data: The Big Boulder Initiative

Jason Gowans:

Looking forward to rolling up my sleeves and working with these amazing people on the Big Boulder Initiative.

Originally posted on Thought Experiments:

A few weeks ago, I had the opportunity to participate in a working session of The Big Boulder Initiative, an industry association founded to promote understanding and development of the emerging social data market.

It’s been an eventful week in the industry; Topsy was acquired by Apple earlier this week, and DataSift raised an impressive $42 million in their series C round of funding. With increasing momentum comes increasing complexity, and The Big Boulder Initiative has been convened to identify, prioritize and begin to address the most pressing technology, business and consumer concerns affecting the future of the social data industry.

Here’s the video summarizing the event:

The Big Boulder Initiative from Gnip on Vimeo.

The issues we discussed were:

  • Privacy, Trust & Regulation
  • ROI & Value
  • Data Access
  • Data Standardization
  • Cost of Data
  • Data Quality & Validity

Here is a summary of what was discussed and agreed to.


Start Up Where The Action Is

[Image: needle in a haystack]

Not a week goes by without hearing of the latest big data analytics startup that’s going to optimize a company’s marketing spend. While it’s true that there’s an incredible amount of innovation happening in marketing just now, what’s sometimes lost, or perhaps ignored, is the size of the potential impact of the startup on a retailer’s business. Depending on who you talk to, digital marketing as a percent of revenue averages around 2.5%, with US retail spending almost $10B a year on it.

Sounds like a big market, and it is, but keep in mind most of the money is going to the big guys like Google, Facebook, and any number of the big publishers such as AOL, Yahoo!, and MSN.

So, let’s consider a hypothetical startup that’s in the business of optimizing a company’s social engagement. Further, let’s assume that they’re trying to sell their solution to a retailer doing $1B in revenue. If we accept that this retailer spends 2.5% of revenue on digital, that means they have a total budget of $25MM, broken out as follows, based on an analysis of how marketers allocate their digital marketing budgets:

Digital Budget        Percent   Dollars
Online advertising    12.5%     $3,125,000
Content creation      11.6%     $2,900,000
Corp site             10.7%     $2,675,000
Search                10.7%     $2,675,000
Email                  9.6%     $2,400,000
Analytics              9.5%     $2,375,000
Social                 9.4%     $2,350,000
Mobile                 7.4%     $1,850,000
Commerce experience    7.2%     $1,800,000
Video production       5.9%     $1,475,000
Company blog           5.3%     $1,325,000
Other                  0.2%     $50,000
TOTAL                 100.0%    $25,000,000

Taking the oft-cited 400% target ROAS (I’ll save the ROAS conversation for another post), this would mean that this startup is in the business of optimizing $9.4MM of this particular retailer’s revenue ($2.35MM * 400%). If we further assume that this retailer is running net margins of 10%, this means that social contributes just $940K to this retailer’s bottom line. Even if this startup can improve the retailer’s social ROAS by 50%, we’re still only talking about an incremental $470K to the bottom line.

Now, you compare that to the same $1B retailer whose gross margins are perhaps 40%, and their cost of revenue is $600MM. If you’re a startup in the business of optimizing say, merchandise allocation, it’s highly likely that your company’s potential impact is substantially larger than the $470K this social engagement startup could drive to the bottom line.
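Here’s that comparison as a short Python sketch, using the assumptions above (the 0.1% cost reduction at the end is purely illustrative, not a claimed result):

# Bottom-line impact of the hypothetical social-optimization startup
revenue = 1e9         # $1B retailer
digital_pct = 0.025   # digital marketing at 2.5% of revenue
social_share = 0.094  # social's slice of the digital budget (table above)
target_roas = 4.0     # the oft-cited 400% ROAS
net_margin = 0.10

social_budget = revenue * digital_pct * social_share  # $2.35MM
social_revenue = social_budget * target_roas          # $9.4MM
social_profit = social_revenue * net_margin           # $940K
print "50%% ROAS improvement adds: $%.0fK" % (social_profit * 0.5 / 1e3)  # $470K

# versus the cost side: $600MM cost of revenue at 40% gross margin
cost_of_revenue = revenue * (1 - 0.40)
# even an illustrative 0.1% reduction in that cost base beats the social play
print "0.1%% cost reduction saves: $%.0fK" % (cost_of_revenue * 0.001 / 1e3)  # $600K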

To be clear, I’m not saying that being in the business of optimizing digital marketing is a bad idea. Only that, when you set out to create a company, have in mind not just what’s hot and can attract funding (and is therefore also massively crowded), but also where there’s potential to create a truly huge impact for your target customers, perhaps with less competition from other startups, in an area more ripe for disruption…
