Nahla Salem

How Company Culture Affects Product Success

I recently gave a talk with Product School about how company culture affects product direction and success. It's now up on YouTube, so I'm sharing it here.

I discuss what culture is, how different types of product require different cultures, how to trace “product symptoms” back to culture root causes, and how to build/change culture.

Culture, I believe, is the missing link between product development and product success. This talk should be useful for product leaders (not necessarily people managers) looking to understand and build the best culture for their product.

Enjoy!

Webinar: How Company Culture Affects Product Success by Nahla Salem - YouTube

Nahla Salem

What Marketplace Product companies can learn from Airbnb

I’ve been following Airbnb for years, and I think it’s an outstanding example of a successful marketplace product company. Airbnb lost 80% of its business in the span of a few weeks when COVID-19 hit but was able to rise from the ashes. I’ve learned several lessons from Airbnb over the years and will share some of them here.

Playing the right game

There are three big product games: 

  1. Attention (think social media)

  2. Productivity (think Microsoft)

  3. Transactions

Most marketplace companies are Transactions companies (think Uber and Instacart). Airbnb, too, is a Transactions company, and its North Star Metric is/was Number of Nights Booked. This business model and North Star keep Airbnb aligned with the success of both sides of its marketplace.

Airbnb has also benefited from the network effects of a consumer marketplace. Its earliest success was in cosmopolitan New York. From there, hosts would tell their friends about Airbnb, friends would become guests, people would come to New York as guests and then carry the Airbnb idea back home all over the world, and guests would sometimes become hosts themselves. Then it wasn't just New York.

However, Airbnb didn't start out as a Transactions company. True to the stages of marketplaces that Andrew Chen defines, Airbnb started out as a directory: a directory of places you could stay when a big event was likely to max out hotels. They then added payments and matured the experience into an end-to-end one. According to Chen, we are now in the "Managed Marketplace Era," where marketplaces take on additional value-add beyond being just an intermediary, which is indeed what Airbnb is doing now with its focus on Experiences and remote work.

Agility

Airbnb has been successful in staying close to its users and changing course based on market feedback. Here are some examples:

  1. Airbnb's very initial idea was to focus on cities hosting big events, with people using a spare room/air bed (hence the name) to host attendees who couldn't find a hotel. But then people started asking if they could use the service outside of events. The rest is history. Fast forward to 2021, when big-company Airbnb had a mature experimentation culture and famously ran 700+ experiments per week!

  2. Airbnb used to be big on photos and names as a way for hosts to trust guests, but it now hides guest photos until a host accepts a booking, to prevent hosts from rejecting guests based on racial discrimination. This was not an easy decision but a drawn-out process, which you can read more about here.

  3. Airbnb had a difficult time during COVID, losing 80% of its business in the span of a few weeks. I admired the fact that Airbnb chose to refund guests who had to cancel because of COVID in the spring of 2020 (I was one of those people). Nevertheless, Airbnb was able to go public during the pandemic, restructure the company to be leaner, come up with a vision of how post-pandemic travel would be different, and execute on it.

Innovation/Focus

Perhaps counterintuitively, I see Innovation/Focus as two sides of the same coin. Steve Jobs famously said, "People think focus means saying yes to the thing you've got to focus on… Innovation is saying 'no' to 1,000 [good] things."

Airbnb has consistently innovated in a focused way, showing vision and direction from leadership. For example, the status quo paradigm for travel websites was flights + stays. As Airbnb grew and looked to expand, its leadership consciously chose to stay away from flights, despite flights being a “natural” vertical expansion to the Stays use case. That’s because the flights space is commoditized and there’s little value for Airbnb to add. 

Instead, back in 2016, Airbnb launched Experiences, allowing users to book activities related to their Stays: both a new source of revenue and a way to move Airbnb's North Star of nights booked. Then the pandemic hit and Airbnb halted its investment in (physical) Experiences. However, Airbnb launched online experiences only a few weeks after the world came to a stop, on April 9, 2020, to be precise (more evidence of Airbnb's agility!). Fast forward to 2022, when Brian Chesky announced renewed investment in Experiences as part of Airbnb's vision for post-pandemic travel.

Last words

This post has been fairly flattering of Airbnb, and has ignored aspects of Airbnb's business such as its effect on rental prices and housing availability. I also have questions about how niche the Experiences and long-stay use cases are. That said, Airbnb is a wonderful example of a marketplace product company that's playing the right game. Airbnb has been able to change and respond to user needs and shifts in its ecosystem, and to innovate consistently for 15 years. There is a lot for us to learn from Airbnb.

---

Sources

What’s next for marketplace startups? Reinventing the $10 trillion service economy, that’s what.

How Airbnb Disrupted The Hospitality Industry

Every Product Needs a North Star Metric: Here’s How to Find Yours

North Star Metric - Guide (with examples)

Airbnb CEO says he's not focused on share prices

Airbnb Answers: Guest profile photos

What data experiments tell us about racial discrimination on Airbnb

Airbnb CEO Brian Chesky at Skift Global Forum 2022

Nahla Salem

Product Innovation & how it happens

As part of the Reforge Product Strategy course I'm taking, Fareed Mosavat, previously of Slack, yesterday shared a great case study about Slack's growth loops and how his team tackled increasing conversion to paid plans.


Fareed explained how his team started from the 'big hairy problem' of self-serve monetization, worked on understanding the monetization engine, and then identified optimization opportunities. From there they were able to pinpoint 'smaller' problems such as user awareness, and identified saving message history as an important user problem.

To me this is a wonderful example of product innovation. Product innovation doesn't mean coming up with a crazy world-changing idea; perhaps counterintuitively, it means discipline in a discovery process and understanding user problems before jumping into solutions.

For that innovation to happen, I believe two things need to be in place:

  • Product teams need to have autonomy. This means going into a quarter not knowing what to build, but knowing very well the goal we’re trying to achieve.

  • The planning process needs to support the above, focus on goals and problems, and stay away from solutions as much as possible.

If you want to learn more about monetization, I highly recommend this talk by Fareed.

To learn more about a planning process that enables innovation, I highly recommend How to Run a Quarterly Product Strategy Meeting with Gibson Biddle, former VP Product, Netflix.

Nahla Salem

How to best integrate ML into your software process

Here is why starting a "Machine Learning team" is a big mistake.

Over the years, as ML became an integral part of product development, I have witnessed the journeys of many companies and startups to integrate Machine Learning (ML) into their software development process. Compared to non-ML software, ML requires a different calibre of talent, operates on a different cadence and process, and involves a much higher degree of uncertainty, which presents a challenge.

Team structures in the Tech industry

Let's think back to how team structures evolved in the tech industry. We started out with technical teams such as front-end teams and back-end teams, and those of us old enough will remember when 'mobile teams' were added to our companies. However, this structure meant talking to too many people to execute a project that required multiple skill sets. And hand-offs were a nightmare.

In came agile software development, and the new standard became self-organizing cross-functional teams. Spotify took that to the next level when it formalized the concept and practices around it, and gave us handy terms to use such as “Squads” for self-organizing cross-functional teams.

"Squads" became the new industry standard. According to Marty Cagan, the FAANG norm is "Empowered" cross-functional teams that are given business context and goals, then allowed to innovate and iterate on problems.

ML teams

Because ML is such a specialized discipline and one that is less understood and more difficult to manage, the default arrangement for Machine Learning teams is to bring ML folks together in one team that is managed by a technical (ML) manager.

This setup of separating out ML teams has multiple advantages:

  1. It creates a sense of community for ML folks and provides a good setup for sharing ML knowledge, including tooling.

  2. Because the ML work is gated by a technical ML manager, the work reaching ML folks has already been vetted by an ML person who is also a manager. When ML folks are part of a cross-functional team in an org that doesn't understand ML very well, they are sometimes asked to work on non-ML things, which they hate and which is a net loss of efficiency for the company.

However, the disadvantage is that you lose what cross-functional teams offer:

  1. A lot more overhead, loss of speed, and loss of autonomy for individual contributors.

  2. More importantly, you lose out on innovation. When ML teams are separated out, ‘requests’ have to go to them through a technical manager who doesn’t have the full business context. Exaggerating just to make the point, this setup is like a service desk that is quite removed from the business/user problem. 

Cross-functional ML teams

I strongly believe that integrating ML folks in cross-functional teams is the best org structure for the following reasons:

  1. In a dynamic ever-evolving tech scene, prioritizing the structure that optimizes for innovation and access to business context increases your chances of success.

  2. While splitting ML folks across different teams might cost them a sense of community, this can be compensated for rather easily. Technical folks love to build technical communities, and the technology is the same if people change companies or orgs. However, it is extremely difficult to absorb business context that isn't provided by your immediate org structure. Business and product goals and context vary from company to company and from org to org.

  3. ML folks are sometimes worried that being part of a cross-functional team will lead to them working on non-ML things. Even when justified, this concern is temporary by nature and should go away with time and education as the company gets more familiar with ML.

  4. If you are worried that ML folks need an ML direct manager, you can adopt Spotify's matrix structure.

Tiger teams

If your ML team is quite small because you're in a startup or just starting your ML journey, you might have ML folks serving different teams out of necessity, and it makes sense to group them into an ML team.

One way to get the best of both worlds that I've used in the past is Tiger teams. You can maintain a technical ML team, but clearly assign ML folks to a product team, or part of it, for a fixed duration (such as a quarter or three sprints) to solve a certain problem. Create a temporary Tiger team composed of ML and non-ML folks, give it a name if you have team names, create a separate Slack channel for it, and have it run its own Agile rituals. Do whatever it takes for that team to really feel like one team.

Where do I start?

No org structure is perfect; each one has pros and cons. My best advice is to be aware of the inherent cons of the structure you have and compensate for them. For example, a technical ML team can achieve a certain degree of success if its manager has a good eye for business and strives to "integrate" their team members into the business/product problem.

If you are not familiar with Spotify’s Engineering culture, I highly recommend taking a look. Marty Cagan has a lot of advice on achieving an Empowered product team structure, and both of these resources are great, even if we set aside the ML question.

Team structure is a topic I'm passionate about, and I'm always happy to chat and help people on their journeys to unlock the potential of their teams. So send me a message!

Nahla Salem

On Enterprise Product Management

All 16 years of my software experience, including my 8 years as a PM, have been in the Enterprise/B2B space. Now at Yelp, I'm working on the local business side of our marketplace.


I recently attended a Lean Product Meetup talk by Dan Olsen, about Enterprise Product Management. Dan shared the challenges in Enterprise PM that differentiate it from consumer PM, and how to work through them. I found the list to be very relevant, so here goes:


1- Clients dictating specific solutions
2- Having to satisfy both buyers and end users
3- Sales influencing product priorities
4- Limited access to customers
5- Prioritizing across multiple market segments
6- Inflexible product release deadlines
7- Struggling to apply an MVP mindset

Listing these challenges may make applying product practices in an Enterprise product setting seem like an impossible task. But I can tell you that over the years I have seen Enterprise products adopt more consumer product practices, in a good way. Think of product-led growth in Enterprise products, which didn't exist a few years back but has been applied very successfully by products like Slack and Zoom.

Culture changes slowly, but it changes over time. For that shift to happen, PMs need to be aware of these challenges and be the beacons of product strategy, and they need the backbone to do it. Otherwise they will end up as deal brokers ferrying information between stakeholders.

Watch Dan’s talk here for details on how to tackle the above challenges and read my previous blog post on PM backbone here. Enjoy!

Nahla Salem

Are product managers consensus-builders?

One counterintuitive requirement for successful product managers:

Product management is usually seen as a job requiring great communication skills, stakeholder management, and consensus building. This is true, of course; however, in my opinion it's more important for PMs to be unrelenting in standing up for and pursuing their ideas. I don't mean going off on hunches; I mean forming educated opinions based on a deep understanding of a problem area, its user personas, and their needs and wants.

Companies that prioritize consensus between internal stakeholders end up with PMs who act like deal brokers, working on ideas that everyone agrees on. By definition, consensus draws you to the mean. You will avoid the extremely bad ideas, but you will also avoid the innovative ones. One Netflix anecdote is Reed Hastings stopping shows because they were "too successful," which meant Netflix wasn't failing enough and his team was not taking enough risk.

Consensus is good, but not all the time. If you're not failing, you're not taking enough risk, and one reason might be your decision-making culture.

Nahla Salem

Building Trustable Enterprise Data Science Products


This article was published in December 2018 on Kernel, the blog of Rubikloud, where I worked at the time. Rubikloud has since been acquired by Kinaxis.

At Rubikloud, we deliver Intelligent Decision Automation for the Enterprise. We have two products: Price & Promotion Manager, which delivers automated mass promotional demand forecasting at chain, store, and product levels, and Customer LifeCycle Manager (CLCM), which leverages Data Science to understand customers and automatically generate curated experiences across various channels and customer touchpoints.

Fig 1: The rich ecosystem our models live in

Our models live in a rich ecosystem as shown in the above diagram. The type of Product Management we need to make these products successful requires strength in and management of:

  • The Retail business: Merchandising for Promotion Manager and Marketing/CRM for CLCM. Merchandising departments plan and develop strategies to enable a retailer to sell a range of products en masse to deliver sales and profit targets. Marketing departments are responsible for managing a retailer’s brand and conducting campaigns that serve the retailer’s marketing initiatives.

  • Enterprise Services: What it takes to roll out Enterprise products that integrate with systems and processes on the client side.

  • Data Science: How to productize a model and translate results to business impact.

  • Software Engineering: How to build reliable scalable infrastructure and workflows to support our models and the huge amounts of data we ingest and use.

Fig 2: Aspects of Rubikloud Product Management

You can start to get an idea of how unique the products we build are. Most software products out there have only a couple of the above aspects. Each of these aspects warrants a blog post of its own, so for the rest of this post I'll focus on one aspect of building Data Science products: building products that users trust.

Building Trustable Data Science Products

As Data Science permeates businesses more and more, data science practitioners and business decision makers are finding ways to integrate and adopt it. Topics such as model explainability and "human-in-the-loop" approaches are being discussed, the reason being that a big part of achieving Data Science adoption in the Enterprise is making sure users have a basic understanding of the Data Science system, and trust it, rather than thinking of it as a black box.

At Rubikloud, the products we build make predictions and forecasts that influence business decisions on our clients' side. The processes we affect, be they sales forecasting or marketing management, are well established at most retailers. It is not easy to ask business users to "trust the machine" for decisions they have made themselves for the longest time, even if those decisions were based only on personal experience and heuristics.

From a product management standpoint, we are cognizant of this challenge. To lay the foundation, our products are meant to aid business decision making, not replace the human factor altogether. Our products are essentially systems of insights and predictions that are relevant to and integrated with our clients' business, and they include features specifically designed as levers that users can control.

Here are some of these features:

1. Business Rules

One of the reasons business users sometimes find it hard to 'let go' of decision making is that retailers make a lot of adjustments and exceptions at the level of an individual campaign or sales forecast. We have developed a Linear Programming module that lets us optimize for a business objective while honouring constraints that we expose to the user as business rules, which users can turn on and off and whose thresholds they can configure through an intuitive UI that uses terms business users are familiar with.
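As a simplified illustration (not our production module; the rule names, uplift figures, and costs below are made up), a business rule with an on/off toggle and a threshold can be translated into a linear-programming constraint with an off-the-shelf solver roughly like this:

    # A minimal sketch: rule names, uplift figures, and costs are hypothetical.
    import numpy as np
    from scipy.optimize import linprog

    # Forecast incremental units per unit of discount depth, per product.
    uplift_per_unit_discount = np.array([120.0, 80.0, 45.0])
    # Promotional cost per unit of discount depth, per product.
    cost_per_unit_discount = np.array([300.0, 150.0, 90.0])

    # Business rules as exposed in the UI: a toggle plus a threshold each.
    rules = {
        "max_promo_budget": {"enabled": True, "threshold": 400.0},
        "max_discount_depth": {"enabled": True, "threshold": 0.5},
    }

    c = -uplift_per_unit_discount            # linprog minimizes, so negate to maximize uplift
    A_ub, b_ub = [], []

    if rules["max_promo_budget"]["enabled"]:
        A_ub.append(cost_per_unit_discount)  # total promo spend <= budget
        b_ub.append(rules["max_promo_budget"]["threshold"])

    depth_cap = (rules["max_discount_depth"]["threshold"]
                 if rules["max_discount_depth"]["enabled"] else 1.0)
    bounds = [(0.0, depth_cap)] * len(c)     # per-product discount depth in [0, cap]

    result = linprog(c,
                     A_ub=np.array(A_ub) if A_ub else None,
                     b_ub=np.array(b_ub) if b_ub else None,
                     bounds=bounds, method="highs")
    print("Optimal discount depth per product:", result.x)

The point of the sketch is that each rule the user toggles on simply adds or removes a constraint before the solve, so the optimizer always respects whatever the business has configured.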

2. Sanity Validations

The outputs generated by our systems typically need to be verified and validated. Verification is a fairly straightforward technical undertaking that we have automated, and typically includes checking that the number of predictions generated is within a certain range.

Validating outputs is not as straightforward, because you want to make sure the outputs “make sense” from a business perspective. We have developed modules that intelligently carry out validations by automating what a human data analyst would do to validate results, such as checking that a certain output follows a certain distribution.

We call these “Sanity Validations” and have also exposed them in the UI so users can turn them on/off and configure warning and severe thresholds. With each iteration of output generation, the results of these validations are sent to users, who can make an educated decision about the quality of the output. Our Data Analytics team had a lot of input in the development of these modules, as the team houses a mix of business and technical experience.
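Here is a stripped-down illustration of the two layers described above: verification as a count check, and a sanity validation as a comparison against a historical baseline. The counts, distributions, and thresholds are invented, and the KS test is one plausible distribution check rather than our exact implementation.

    import numpy as np
    from scipy.stats import ks_2samp

    def verify_count(predictions, expected_min, expected_max):
        """Verification: did we produce roughly the number of predictions we expect?"""
        return expected_min <= len(predictions) <= expected_max

    def validate_distribution(predictions, historical, warn_p=0.05, severe_p=0.01):
        """Sanity validation: does this run's output 'look like' past outputs?"""
        stat, p_value = ks_2samp(predictions, historical)
        if p_value < severe_p:
            return "severe", p_value
        if p_value < warn_p:
            return "warning", p_value
        return "ok", p_value

    rng = np.random.default_rng(0)
    history = rng.gamma(shape=2.0, scale=50.0, size=5_000)   # past forecast outputs
    new_run = rng.gamma(shape=2.0, scale=55.0, size=4_800)   # this run's forecast outputs

    print(verify_count(new_run, expected_min=4_000, expected_max=6_000))
    print(validate_distribution(new_run, history))

The "warning" and "severe" levels map directly to the thresholds users configure in the UI, so the same check can either nudge or block depending on how conservative the client wants to be.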

3. Output Sample Review

Where relevant, we allow our users to review a random sample of the outputs, accompanied by contextual information that allows them to assess their quality. We also provide general statistics on the outputs so clients can make sure the outputs meet their expectations. This is particularly important for outputs that are hard to assess, such as those of recommendation systems.
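A rough sketch of the idea, using made-up recommendation outputs: pull a small random sample for human review alongside summary statistics over the full output.

    import random
    import statistics

    random.seed(42)
    recommendations = [
        {"customer": f"c{i}",
         "offer": random.choice(["10% off", "BOGO", "free shipping"]),
         "predicted_uplift": random.uniform(0.0, 0.3)}
        for i in range(1_000)
    ]

    review_sample = random.sample(recommendations, k=5)   # shown with context for review
    uplifts = [r["predicted_uplift"] for r in recommendations]
    summary = {                                           # overall statistics for the client
        "count": len(uplifts),
        "mean_uplift": round(statistics.mean(uplifts), 3),
        "max_uplift": round(max(uplifts), 3),
    }
    print(review_sample)
    print(summary)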

4. Override Model Outputs

We know that in some cases business users will have knowledge that's not translatable to data, or at least not to data that we ingest regularly. In those cases, users will want to override a prediction our algorithms have made, either because they "feel" the prediction is incorrect or because they have information, outside the product, that affects it. We provide features in our interface for users to do exactly that: override a specific prediction. We have found that, over time, as our models improve in performance, business users trust the machine more and use of the override features declines.

Fig 3: Use of the ‘Override’ feature over time
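A hypothetical sketch of the mechanics behind this lever: overrides are stored alongside the model output (so the model's value is never lost), served in place of the prediction, and counted to produce the kind of usage curve shown in Fig 3. The SKU names, values, and storage shape are made up.

    from dataclasses import dataclass, field

    @dataclass
    class PredictionStore:
        model_output: dict                              # item_id -> model prediction
        overrides: dict = field(default_factory=dict)   # item_id -> (value, reason)

        def override(self, item_id, value, reason):
            """Record a user override; the model's value is kept for later analysis."""
            self.overrides[item_id] = (value, reason)

        def final_value(self, item_id):
            """Serve the override if one exists, otherwise the model prediction."""
            if item_id in self.overrides:
                return self.overrides[item_id][0]
            return self.model_output[item_id]

        def override_rate(self):
            """Share of predictions overridden: the curve you would expect to decline."""
            return len(self.overrides) / len(self.model_output)

    store = PredictionStore(model_output={"sku_1": 120.0, "sku_2": 45.0, "sku_3": 300.0})
    store.override("sku_2", 80.0, reason="local event not reflected in the data")
    print(store.final_value("sku_2"), store.override_rate())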

5. Users affecting model parameters

We use a probabilistic approach for certain predictions our products make, and our systems suggest best practices for these approaches. As our products matured, we exposed options in our UI for power users to control some parameters, such as safety scores and confidence levels, based on their knowledge of external events. So that users can do this in an educated way, we provide them with context on how a certain prediction compares to the history of the predicted event.
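As a simplified, hypothetical illustration: for a probabilistic forecast, a "safety" lever can be exposed as a service level, where a higher setting moves the planned quantity to a higher quantile of the forecast distribution, shown next to historical actuals for context (all numbers below are invented).

    import numpy as np

    rng = np.random.default_rng(1)
    forecast_samples = rng.normal(loc=1_000, scale=120, size=10_000)  # probabilistic forecast
    history = [940, 1_020, 980, 1_150, 1_005]                         # past actuals, for context

    def planned_quantity(samples, service_level=0.90):
        """Higher service level -> higher quantile -> larger safety buffer."""
        return float(np.quantile(samples, service_level))

    default_qty = planned_quantity(forecast_samples)                       # system default
    adjusted_qty = planned_quantity(forecast_samples, service_level=0.97)  # user raised the lever

    # Context shown next to the lever so the adjustment is an educated one.
    print(f"default={default_qty:.0f}, adjusted={adjusted_qty:.0f}, "
          f"historical mean={np.mean(history):.0f}")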

To summarize, my tips on the topic would be:

  • When building Data Science products, avoid building a black box product and design for features that allow domain experts to affect results.

  • Spend as much time making data science models user-centric as you do making models elegant, if not more.

  • It takes many iterations to get the marriage of data science and business right. Plan with that in mind.

You can read more about our Data Science and Product & Analytics work.

Related reading: the blog post “AI: The Next Evolution of Automation” by our Chief Data Scientist, Brian Keng, about automating AI systems, and the blog post “Sheltering Models: Machine Learning Engineering as a Gradual Need” by our Data Science Manager, Javier Moreno, about RkLuigi, the library we developed to plumb production-grade machine learning systems with tremendous amounts of data at their core.
