Using Checklists For Onboarding Success, with Radoslav Stankov


Radoslav Stankov is the Head of Engineering at Product Hunt. He actually took over the post from our last guest, Andreas Klinger, and Andreas wanted us to follow up with Rado to learn about some of the changes he’s made. 

In this episode, we dive into the impact that “single-player mode” has on remote work and how Rado uses checklists and clear expectations to ensure his team gets onboarded effectively.


Full Transcript:

Wes Winham:
Rado, thank you so much for joining us today on Scaling Software Teams.

Radoslav Stankov:
Thanks for having me.

Wes Winham:
So what is single-player mode and why should we optimize for single-player mode on our teams?

Radoslav Stankov:
Yeah, so single-player mode is basically one of the enablers for how we work. The idea of single-player mode is that a developer should be able to execute from start to finish with the least amount of blockers. For me personally, the thing which I defined as [inaudible 00:01:44] means automated decisions for trivial questions. And one problem with having bigger teams is you depend on other people. In my opinion, one of the biggest challenges in software engineering is dependencies. These can be from software dependencies, not [inaudible 00:02:05] gems, to people dependencies of, Okay, I cannot execute because I'm waiting for the design, or, I cannot implement this feature because the backend is not ready.

So what we call playing in single-player mode means an engineer being able to work on a feature from the database design to deploying it to our servers, to writing the backend, exposing the API, writing the front end and adjusting the CSS. Handling everything. The goal is to be blocked as little as possible. If a developer misses a design, maybe just mock up the UI yourself, use some of the components you already have, and fix it later. If you don't know how to do something technical, okay, just do a manageable hack and handle this. If you don't know about some business or product decision, just make the decision. How much can you screw it up in one or two days? Get the show on the road. So that's basically what single-player mode is.

Wes Winham:
So we're reducing dependencies on decisions or technical pieces so that we can move faster and be more productive. That makes sense to me. What are some things you've done to enable better single-player mode on your team?

Radoslav Stankov:
Yeah, so basically what I like to say is we have single-player mode, which is the thing which enables the execution of the company. And for single-player mode, we have something which I call enablers of single-player mode. One of them is best practices. The idea there is, you can imagine being in a team where everybody's doing their own stuff, and it can get quite messy. So we are very focused on having a best-practices-driven approach. For example, we have style guides automated, we have automated tests, and all code passes a pull request review. We try to not be messy and follow good engineering principles. So that's one of the enablers, to not have a mess from single-player mode.

The other one is automation. Everything we can automate, we try to automate as much as possible. Automation is a bit tricky, because if you automate too early you can box yourself in. But in general it's good. For example, a deployment should just be one command; you shouldn't put much effort into it. The other thing is we follow the Boy Scout rule, or the way I like to present it, pick up the trash lying on the floor. You're an engineer, you work on something, and you see something messy because we have done a transition, because we have done work. And what you do is, okay, let's pick up the trash, let's just clean up this thing. You shouldn't ask for permission to refactor or to clean up stuff. You should feel that this is your responsibility. This way the whole system gets better.

And the last two things are the small checkpoints. What we do is trunk-based development. We basically work with branches. Everything that goes to master passes a pull request review. But the idea is every feature is split into deployable parts, and every pull request should be between one or two days of work at max. And it should be deployed behind a feature flag. We have a whole tooling set around that. So basically, if there are mistakes, they're very easily caught in production and the engineers are not spending... Okay, I'm spending one month working on this feature. When I have to integrate it, I have to rebase, I have to ship it and, okay, oh, I shipped it and I didn't think about having to do this database migration.

And if you do those things in smaller checkpoints, if you do them every day, and the feature is enabled in production behind the feature flag, you have safety. You avoid the big releases, the releases where in other companies the whole engineering team gathers around to put out the fires. With that, you just go to the admin and click enable for the feature. And most probably this feature was enabled for a bunch of users beforehand, so there isn't much pressure around that.

Then if something goes wrong with the feature, you just disable it, fix it, and you have this safety net. And the final part, which is also an enabler for some of the other things we do, is we try to be a bit more data-driven. So we try to not guess what happens but actually look at metrics. We are not very good yet at that, but we are getting better at how we collect metrics and how we fold metrics into our own flow. And basically, the things I'm saying are just quotes from our... I have this document which is called Tower Development Principles. And this is something we keep in our documentation repository and I share with every engineer.
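The feature-flag safety net Rado describes, ship dark, enable for a few users, flip it on for everyone, and turn it off instantly if something breaks, might look roughly like this in plain Ruby. This is an editor's sketch with invented names (`FeatureFlags`, `enable_for_user`, etc.), not Product Hunt's actual tooling:

```ruby
require "set"

# Minimal feature-flag registry: features ship dark, can be enabled for a
# subset of users first, then for everyone, and switched off instantly.
class FeatureFlags
  def initialize
    # name => { everyone: bool, users: Set of user ids }
    @flags = Hash.new { |h, k| h[k] = { everyone: false, users: Set.new } }
  end

  # Roll the feature out to a single user (teammates, beta testers).
  def enable_for_user(name, user_id)
    @flags[name][:users] << user_id
  end

  # The "click enable in the admin" moment: turn it on for all users.
  def enable_for_everyone(name)
    @flags[name][:everyone] = true
  end

  # The safety net: disable the feature everywhere without a deploy.
  def disable(name)
    @flags[name] = { everyone: false, users: Set.new }
  end

  def enabled?(name, user_id)
    flag = @flags[name]
    flag[:everyone] || flag[:users].include?(user_id)
  end
end
```

In application code the check would wrap the new code path, e.g. `if flags.enabled?(:stories, current_user.id)`, so an unfinished feature can merge to master every day without being visible.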

Wes Winham:
So those are the five things you do to enable single-player mode: data-driven decisions, automation, small chunks, lots of best practices. And I think there was one other one in there that I forgot.

Radoslav Stankov:
Yeah, the Boy Scout rule.

Wes Winham:
Boy Scout rule. Yeah, I like those. So if the whole team is following those principles, then it enables people to work without dependencies on each other. Or fewer dependencies on each other, I would say.

Radoslav Stankov:
Yeah. I mean you always have dependencies, but they should be the exception, not the rule. And also, for enabling single-player mode, everybody should be very comfortable working. Usually fear is the thing which holds people back. I'm an engineer. Okay, I'm a bit scared to not implement the design as it is, because there is this obvious flaw with it when you [inaudible 00:08:26] which you have data. You shouldn't be scared of that. It's your teammates, everybody wants you to do a good job, and you just do this.

I'm scared because this is a big feature, and if I deploy it, I'm scared that I'll bring the whole system down. Okay. How can we split this into chunks and validate that each part works so we don't bring the system down, and stuff like that. Basically, trying to work in this manageable environment. Also, the other extreme is you are fearless, you are a cowboy, you start shooting out features, and everything becomes a mess and nobody wants to work on other people's features because they're a mess. And that's the reason, again, everything passes through the best practices, every piece of [inaudible 00:09:10] is kind of the same, and so on and so forth.

Wes Winham:
Sounds like one of the key pieces there is breaking things into small enough chunks that you can deploy incrementally. I've seen engineers come from environments where they didn't have that kind of mandate and those principles struggle to break work down. As a leader, how do you help your team break the work into small enough deployable chunks?

Radoslav Stankov:
One of the things I do first and foremost is... Again, I have this [inaudible 00:09:40] process and I try to talk with them and teach them how this happens. And I also try to do a bit of a retrospective approach. For example, a lot of times, especially the newer engineers, tend to always start with bigger chunks, because in their head it's the whole thing. I just let them do it and then we do a small retro. See how these break, how we can improve that, why this is happening. And I just let them have a bit of time to learn how to split their work into good chunks. Because, again, I'm still learning this myself in some ways.

And the idea there is, again, creating the safe environment. And yeah, you worked a whole week on this feature. How could you have split it up? Why did you work a whole week on it? Can we get rid of some of the features because it's too big? And again, just asking questions and trying to work through it. And also, since everybody works together, people start... They're like rocks. When tumbling together they start to smooth each other's shapes.

Wes Winham:
So during onboarding you cover that this is the principle, and then for the first couple of features maybe they do build it in one-week or two-week chunks. And then you go back and use a retro and say, how might we have split this into smaller pieces? And that's kind of your tool for getting that learning together, rather than saying, "No, stop, you can't do that." It's more, "How could we have done this?"

Radoslav Stankov:
Exactly, exactly. And often when we do bigger features, I sit with them, or somebody else on the team does, and we decide, okay, how we are going to architect this. We have this big feature we are working on right now. It's a very internal thing, but it's very massive. And, okay, how can we make the transition from the old system to the new system, how can they work in parallel, how can we deploy it, how can we stage things, how can we validate our bets. This comes a bit from the data-driven approach, how we can validate that this actually works. And this actually turned the feature around quite a bit. And basically, we have a roadmap where every day we ship some very small pieces. And they actually kind of validate the feature itself and help us move forward.

Wes Winham:
You position your team as the MVP factory. What do you mean by that?

Radoslav Stankov:
The way Product Hunt internally works, basically we have two types of initiatives on the product level. We work in quarters. Basically, at Product Hunt we sit and we plan projects which take one quarter. And the projects basically fall into two categories. The first category is improvements to existing things. For example, last quarter we put a lot of improvements into our commenting system, added polls to our comments, improved some of the spam detection there, we combined previous comments. And overall, improvements are to something which already exists.

You need to do that because, again, there are active users, you know what their needs are, and it's kind of easy. The other category, and we actually do a lot of those, I call MVPs, or moonshots. Totally new product initiatives. Things we never had before, and we try them and we see how they work. This year we launched our new browser extension. Again, it's a totally new initiative. It had aspects that the [inaudible 00:13:26] had before; it has something which is called a virtual coworking feature. We launched something called Product Hunt Stories, which is stories about makers. Like news article stories, narrative. We launched something called Founders Club, which is a new subscription service which bundles features for founders.

We are working on a new project which hopefully will be available in a couple of weeks in beta mode. It's called YourStack. We already have the Twitter account and all of that, so this is a bit of a moonshot. And those are projects which, again, start from kind of zero. Zero users, zero stats, and the approach there is a bit different from the improvements part. For the improvements part, a lot of the traditional software planning systems work well. A lot of data-driven approaches work well, because you already have the data. But when you ship an MVP or a moonshot it's a very different approach.

Wes Winham:
If you had zero users it's tough to get data from zero users.

Radoslav Stankov:
Yeah, and you have to validate the idea, you have to see how it goes, how you build it, where you... Should I launch it, should I not launch it? Usually those become bigger releases and stuff like that.

Wes Winham:
And how do you structure those projects or your team to be well-suited to that kind of MVP moonshot work? Because I think traditional Agile methods are an awesome fit for that polish type of work, where you kind of know what you're doing.

Radoslav Stankov:
So we are still working on that. We've been doing this for a couple of years now, and I realized this January, especially working with the new team members, that we work in this way. And basically, I think the first thing which really helped the team was actually explicitly saying that we have those two types of projects and their approaches are different. And yeah, we are flying blind, but still we can do some experiments to validate the project early on. And also, you should try to reuse as much as possible from the previous stuff. Because when you launch a new product, you need to make a lot of decisions. And a lot of the stuff we've already done has baked-in decisions which have been validated through the years; they work. You just use those, and it makes your work faster.

Wes Winham:
So kind of like Blessed Path or Golden Path for technical decisions and UI and stuff?

Radoslav Stankov:
Exactly. For example, we push into reusable stuff even more. In a lot of organizations, yeah, you build it just as it's needed. For us, it's like, okay, how can we make this more reusable between projects? Because if the MVP fails, it's not a failure, it's a learning. But one thing we try to do from an engineering point of view is actually salvage technologies from it. For example, Founders Club is a subscription-based project. And for this project you have to implement a subscription service. A way for users to subscribe, to cancel, to validate changes. All this work is very standard, but it's very needed for this feature. It's not the core of the feature, but it's something for the site.

So it basically took us maybe two or three extra hours to make this whole pipeline of subscriptions more generic. So in the future when we add a new subscription service, we can just attach this logic to it and reuse the whole funnel. So the next time we have an MVP, we can just reuse the subscription models. For example, when we launched Stories, we had already built a commenting system with threads and all of that stuff. We already had the voting buttons. We had already built a whole feature flagging system. We already had an HTML editor so you can make article building work well. And this project contributed to improving this editor even more, because it needed a lot more features. And things created this way can be reused. When we created the polls project, attaching polls with options to every comment, the way we designed it, yeah, it wasn't the optimal design if you're just going to have this in one place. But we spent extra time to make this component able to attach polls to any entity in our systems.

I mean, right now I think two more places are going to get polls, which is basically almost adding them for free. And you can add them, see how they work, do they work, and if they don't work just remove them from that new place. And this is how we try to approach that stuff. Kind of exploiting the engineers' instinct, right, to want to build reusable things. In a lot of traditional places, making stuff generic has a cost. For us, paying this cost fits the way Product Hunt operates better than just following the traditional pattern. That's again the context, the [inaudible 00:19:12]. Some things which work in one organization don't work in another and vice versa.
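"Attach polls to any entity" is the shape of a Rails polymorphic association. A rough sketch of the idea in plain Ruby, with invented names (`Poll`, `PollStore`), not Product Hunt's actual schema:

```ruby
# A poll that can attach to any entity, identified by (type, id) -- the
# same shape as a Rails polymorphic association, sketched without Rails.
Poll = Struct.new(:question, :options, :subject_type, :subject_id)

class PollStore
  def initialize
    @polls = []
  end

  # Attach a poll to any object that exposes #id -- a comment today,
  # a post or something else tomorrow, with no schema change.
  def attach(question, options, subject)
    @polls << Poll.new(question, options, subject.class.name, subject.id)
  end

  # Look up all polls attached to a given entity.
  def for_subject(subject)
    @polls.select do |p|
      p.subject_type == subject.class.name && p.subject_id == subject.id
    end
  end
end
```

The extra cost up front is small, one more column pair instead of a hard `comment_id` foreign key, and it is what makes "two more places are going to get polls almost for free" possible.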

Wes Winham:
So where is the boundary for you? Reusability has its limits, right? There's a point where you make something generic and it's YAGNI, you aren't going to need it. What are some heuristics you use to decide, hey, we could make this reusable, but we're not going to?

Radoslav Stankov:
The thing about it, the border here is... Again, it's always learning. Sometimes you catch it correctly, sometimes you don't. Usually the approach when I'm trying to make something reusable is: is it easy? Is it an entity? For example, polls, it's something separate. It has a name, this thing has a name. We won't reuse the validation for a poll option because it's not a big enough unit. The same thing with React, which we are using on the front end. What I like about React is you think about components, and what we try to do there is, again, ask how many options this thing could have. If you can make something configurable with one or two options, try to make it a bit reusable.

Sometimes we go overboard in some places, but it's, I think, a matter of judgment in all situations. Also, the way we try to approach problems is to be very aggressive about namespacing and [inaudible 00:20:39] modules. And when you start separating your system into modules, those modules tend to have interfaces. And this is where you can add reusability. We have a module which is called polls or, in this case, subscriptions. And when this thing has a name, it's a big enough unit so you can actually reuse it. And, again, sometimes we overdo it, but, again, it's a balance.
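The "named module with a small interface" heuristic can be sketched like this in Ruby. The `Subscriptions` module name comes from the conversation; everything inside it (method names, the in-memory store) is invented for illustration:

```ruby
# Aggressive namespacing: the unit lives under one module and exposes a
# small public interface; implementation details stay internal.
module Subscriptions
  Record = Struct.new(:user_id, :plan, :status)

  # Public entry points -- the module's "interface". Other parts of the
  # system (Founders Club today, the next MVP tomorrow) call only these.
  def self.subscribe(user_id, plan)
    store[user_id] = Record.new(user_id, plan, :active)
  end

  def self.cancel(user_id)
    record = store[user_id]
    return unless record
    store[user_id] = Record.new(record.user_id, record.plan, :canceled)
  end

  def self.active?(user_id)
    record = store[user_id]
    !record.nil? && record.status == :active
  end

  # Internal detail: an in-memory store standing in for a database table.
  def self.store
    @store ||= {}
  end
  private_class_method :store
end
```

Because the whole unit has a name and a narrow interface, the next subscription-based project can reuse it without knowing how records are kept.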

Wes Winham:
So your heuristic is: is this a named thing that we think of as a big enough top-level piece? You're not trying to reuse a lot of utilities and helper validations. This is a top-level thing that we will care about.

So you have this MVP focus, you're launching a lot of new projects, you just listed off four new products, which is more than most teams do in multiple years. When you're hiring engineers, how do you change your process to get folks that are gonna be successful in that lots-of-new-products environment?

Radoslav Stankov:
Oh, you mean how do we onboard them or how do we hire them?

Wes Winham:
I would like to hear both. Both those are interesting.

Radoslav Stankov:
Okay, I'll tell you about the onboarding, because I think I'm a bit better there than at the hiring part. Basically, for the onboarding, I have this checklist of things the engineers should be able to do. The goal there is very simple. We do a lot of things which are very common between projects, and every engineer has to know the mechanics. This is something, in my opinion, a lot of engineers don't focus on: the mechanics of the work they do. Because a lot of times the mechanics are something simple. We are using a GraphQL API. How do you create a mutation? How do you add a new type there? How do you create a form? How do you create a database view? How do you do a big database migration?

All those things are mechanical things which you are doing over and over and over again. Yeah, we have tools and abstractions for them, but you still have to know what's happening. And if you don't know that, you are wasting a lot of your energy on how to do those things and not thinking about the business problem. So the way I try to onboard people, some people say it's aggressive, but everybody I spoke with who was onboarded is quite happy with the approach. It doesn't feel so intensive. Usually I try to have them commit code on the first day, anything. Most times it's just a vacant [inaudible 00:24:07] with [inaudible 00:24:10] pictures of our reputation. Because, again, we don't hire a lot and people know how to start the app, but usually the new person sees something which is in there.

So I try to have them able to do something the first day, just to set up their environment too. Because their environment should be able to be set up in less than a day. Everything there should be automated. In their first week, they should have something user-facing, a feature. And they should present this on our Monday all-hands team call where the whole company gathers. The idea is it can be a very simple feature, it doesn't need to be something big. It can be, for example... I added the hover state on some buttons. Something user-facing, but presented to the whole company. Because most probably this is the time this person is being introduced to the team, and it's good that this person gets into that habit. And also we do a lot of demos of the work we do to show to the rest of the team.

So in this way they see something work. During the first couple of weeks, they have daily one-on-ones with me, or hopefully in the future with somebody else as well, where I help guide them through what they're doing. For the first couple of weeks they're assigned a bit more maintenance work. So they are just working around different parts of the system, and I have a checklist of things they have to implement. For example, create a React component, create a new page, create a new query, the basic mechanics, so they actually see how it goes. On my side, I try to monitor what they're struggling with, what questions they ask me, and I keep them in a list. And based on this list I have them improve the documentation or improve the tooling, so they're actually improving the system as they go. And at some point in the process, during their first month, they have to remove something from our system. Usually this happens around week two. The idea there is you remove something, you remove a dead piece of code or remove a feature which is not needed.

The goal of this is to learn that this is not a sacred system. Everything is up for grabs. Take it, remove it, we are quite open to refactoring and cleanup days. We recently created a roadmap of things which we want to kill, and we do this because a lot of people were scared. I have been in this situation: I go to the [inaudible 00:26:50], this is terrible. And it's there, I cannot remove it. And everybody wants to remove it, but nobody has the guts, and it just piles up, piles up, and increases the maintenance costs. And by the end of their first month, they have to create something reusable. It can be just a utility function, it can be a component in React, it can be a Ruby class, it can be whatever. But they have to create something that can be used by somebody else. So they get into this mode of reusability.

And this is around a one-month, two-month process. And around the third week I start easing off, and that's basically it. That's how we bring people inside. And, again, I'm writing in a list all their questions, the places they're struggling, asking them what they think they could improve, and also taking their feedback on how we can improve. Because a lot of the stuff we have done is organic, and I really like to work with people. Especially right now we are hiring a junior person. We are in the final phases of that, and I cannot wait for this person to come and start asking questions and be confused about stuff, because their confusion is an opportunity to improve the whole system.

Wes Winham:
I love that system. So you start off with: you're going to commit on day one, you're going to demo on the first week's demo day, you're going to check off all of the really low-level tactical things that you need to go high-level, and you're working your way up to building all of the blocks you need to ship features by yourself. How are you keeping track of each person's progress through this? Is this a shared checklist you have, or do you have your own Google Doc somewhere where you're keeping track?

Radoslav Stankov:
I use a note-taking app called Bear and I have a doc for each of my teammates. I have their history, what they've worked on, because everybody on the team has different strengths and different weaknesses. And I have a list there: okay, this person is an expert in Elixir. We don't use Elixir right now, but it's good to know, because sometimes we might need something where Elixir would be a very good fit. And I have: okay, this is the first week and they have this checklist. Have they done this, this, this. And I'm still building it out: okay, what are the common things we do? For example, they should have done an A/B test. But if we don't have an A/B test, they most probably won't do it because we don't have it added to the feature [inaudible 00:29:32]. Basically, [inaudible 00:29:33] them through everything which is mechanical and has to be done.

So I just have a list and I keep track of progress. And with this I'm also recording, okay, who has worked on what kind of projects, because I want to switch people between projects so we share knowledge and people don't get bored.
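The per-person checklist Rado keeps in his notes could be modeled in a few lines of Ruby. The checklist items below are paraphrased from the milestones mentioned in this episode; the class and method names are invented for illustration:

```ruby
require "set"

# A tiny onboarding checklist tracker: one instance per new teammate,
# with the milestones from the onboarding process as items.
class OnboardingChecklist
  ITEMS = [
    "commit code on day one",
    "set up dev environment in under a day",
    "demo something user-facing at the all-hands",
    "remove a dead piece of code",
    "create something reusable",
  ].freeze

  def initialize(name)
    @name = name
    @done = Set.new
  end

  def complete(item)
    raise ArgumentError, "unknown item: #{item}" unless ITEMS.include?(item)
    @done << item
  end

  # What this person still has to do, in the original order.
  def remaining
    ITEMS - @done.to_a
  end

  def finished?
    remaining.empty?
  end
end
```

Keeping the items in one shared constant mirrors the point made later in the episode: when a new hire's confusion reveals a missing step, you add it to the list once and every future onboarding benefits.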

Wes Winham:
I love that system. Checklists are one of the most powerful tools in the universe, especially when they're thoughtfully applied. And you're changing this checklist as every new person comes in. A junior engineer comes in, they teach you something, you add to the checklist. Is that kind of the vibe I'm getting?

Radoslav Stankov:
Yeah, exactly. I started a bit late, but right now I'm adding a lot of lists and trying to capture everything. Even the act of writing the list helped me think about it, making the implicit explicit. And, again, I'm changing a lot of things. Again, I have places for the questions people ask me. And sometimes it's, okay, today I explained three times how we do email tracking. It's not that the people are not smart enough; it means something here is very complicated. How can we make it easier? And that's, again, my idea of the process, because most of the time I have no idea what they're doing and what I'm doing. And I want to have, okay, let's try this, let's see what happens, then move forward.

Wes Winham:
My guest today has been Rado Stankov. Rado, thank you for joining us today on Scaling Software Teams.

Radoslav Stankov:
Thanks, this was fun.
