Product development

We are a product-led company, and how we design and build our product and handbook is critical to our success. This page outlines the principles and practices that guide us.

See also our marketing & brand guidance for details like logos, colors, feelings, and other elements associated with our brand that play into product development.

Source availability

We are big believers in open source and open core software, and that’s why we support projects like OBS with donations.

Synura itself is distributed under our own closed-source, proprietary (but source-available) license. In practice, this means we operate much like an open source project, even for our proprietary code:

  • Our source code is available for anyone to see, and we welcome merge requests from anyone.
  • Our issue tracker is public and anyone can participate.
  • Unlike with open source software, we do not grant permission to use the Synura software in production without a license that matches the features and seats you are using.

Product workflows

Our workflows within the product (creating an account, logging in, creating projects, browsing them, and so on) can be found in our initial mapping Figma document. This is an important resource for onboarding new team members as well as ensuring we’re on the same page when we talk about product concepts, so we try to keep it up to date.

Release and roadmap management

We release on a monthly cadence: the release date is the last non-Friday working day of the month, and the milestone begins on the first day of that same month. The release year starts in August with an x.0 release, and the minor number increments by one each month (x.1, x.2, and so on) through x.11 in July; the following August we bump the major number (e.g., 1.11 is followed by 2.0).
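As a worked example of this numbering, here is a minimal sketch that computes the version for a given date. The epoch year below is hypothetical; substitute the August that shipped 1.0.

```python
from datetime import date

RELEASE_EPOCH_YEAR = 2023  # hypothetical: the August that shipped 1.0


def version_for(d: date) -> str:
    """Return the x.y release version for a given date.

    The release year starts in August (x.0) and the minor number
    increments each month through July (x.11).
    """
    # Months elapsed since the most recent August.
    minor = (d.month - 8) % 12
    # Release years completed since the epoch August.
    major = 1 + (d.year - RELEASE_EPOCH_YEAR) - (1 if d.month < 8 else 0)
    return f"{major}.{minor}"


assert version_for(date(2023, 8, 15)) == "1.0"
assert version_for(date(2024, 7, 1)) == "1.11"
assert version_for(date(2024, 8, 1)) == "2.0"
```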

Roadmap

Our list of upcoming milestones can be found in GitLab. We maintain three specific release milestones, covering the next three months of what we’ll be building. Beyond this rolling three-month period, we use coarser roadmap buckets:

Issues and epics in the upcoming three releases will be quite detailed and broken down, while those further out, in the 3-6 and 6-12 month buckets, will be more prospective. Over time, issues and epics are refined and move up from the 6-12 month bucket to the 3-6 month bucket, and then into specific releases.

In any case, our roadmap is subject to change at any time as we listen to our customers. The best way to advocate for something to be prioritized is to participate in the issue or epic, including upvoting with a thumbs up.

Feature and marketing launches are separate

Note that we separate our feature launches from our marketing launches; the activity that happens at the end of each month is the marketing launch. It’s the point where we summarize and communicate all the new improvements we’ve deployed, but changes are deployed to production continuously as they are finished. This means you may see features from an upcoming release live before the official release date.

Iteration

In line with our iteration value, we iteratively build features to reduce the risk of going down a path where our customers don’t get meaningful value from what we build.

This iteration office hours recording from GitLab highlights some of the ways you can think about decreasing iteration size.

Minimum viability

What it means for features, products, and changes to be minimally viable is important to understand. Think of it not as delivering a broad base of capabilities at a minimal level, but as building up a single use case in a meaningful, differentiated way; nobody is inspired to adopt a creative tool that does the basics and nothing more. A minimally viable feature should still inspire and delight our users, and it should include elements of emotional design.

[Image: MVP pyramid]

Similarly, each iteration must be valuable. First build a skateboard, then a scooter, then a bike, and so on. Make sure users are getting value at each step; otherwise, we aren’t learning anything until, to use the example from the image below, the car is already complete.

[Image: MVP steps, from skateboard to car]

Minimize merge request size

Just as with product design and requirements, iteration is important for how we code as well. The GitLab handbook contains a fantastic guide on how to keep code iterations small, including when to vertically or horizontally slice. See also our general guidance on merge requests.

Commit early, commit often

Getting continuous feedback is especially important when working async. Because of this, apart from keeping every merge request small and focused, we try to commit frequently to branches so that progress can be seen by anyone and they can give feedback. It’s easy to leave a conversation thinking something is clear, and if you only share the branch when it’s done, you might not realize there was a misunderstanding until a lot of work has gone into it.

Instead, commit early and commit often and we’ll stay on the same page with each other better.

Iteration reviews

Another way we stay on the same page with each other is by setting and reviewing iteration goals every other Monday. We set 2-3 goals per iteration on the most important things to focus on (if we are having difficulty understanding or selecting a metric to track, check out the video lecture How to Set KPIs and Goals). These goals are likely to be primary user statistics, aiming for aggressive growth targets like 10% week over week (roughly 21% compounded over a two-week iteration), or other customer development goals (for example, how many users we talked to in the last week). Goals may also be drawn from our monthly release milestones. Iterations and results are tracked publicly in our iteration review log.
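The compounding behind those growth targets is worth making explicit; a quick sketch:

```python
def growth_over(weeks: int, weekly_rate: float = 0.10) -> float:
    """Compound a week-over-week growth rate over several weeks."""
    return (1 + weekly_rate) ** weeks - 1


# 10% week over week compounds to about 21% over a two-week iteration.
print(f"{growth_over(2):.1%}")  # 21.0%
```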

As long as we haven’t yet publicly launched, the number of weeks left to launch is always included as a goal.

At the end of the iteration, we publish a post-review update, and a day or two later (depending on when the investor call lines up), we discuss the results on that call as well.

We do not have daily standups, planning, reviews, retrospectives, backlog refinement, or other such meetings; this lightweight process replaces them. The goal is something simple that triggers interesting and important conversations and keeps everyone focused.

Post-review updates

At the end of each iteration review, the CEO posts an update to a variety of social media channels.

Here is the process:

  1. The CEO takes the output of the iteration review and prepares a blog post progress update covering the highlights. Apart from goals, status, and any necessary graphs (internal-only SVG template), it can include things like:
    • What we learned from talking to our users
    • What most improved our primary metric
    • The biggest obstacle we faced
  2. The team reviews the progress report blog post, adding any additional important information/updates. In the future, this will also be the time we update a central changelog with new features that were delivered in the iteration.
  3. CEO records a video presenting the same progress update and adds it to the blog post.
  4. CEO adds the blog post link with video to the “Progress” playlist on our YouTube channel.
  5. CEO posts the video and 1-2 highlights to Twitter from our account.
  6. CEO retweets from his account and schedules the same post on our other media channels, including LinkedIn.
  7. If this was the last iteration of a given month, then we also send an investor update.

Avoid “LGTM” merge request reviews

This article explains why it’s important not to fall into the habit of rubber-stamping merge request reviews. To keep reviews meaningful:

  1. Make merge requests as small as possible so they aren’t too big for the reviewer to understand, as noted above.
  2. Review your own merge request by coming back to it a little later, with a fresh mind, and leaving questions/comments on anything you’re unsure about.
  3. Include clear instructions for reviewing and testing your merge request.
  4. Test your peer review process now and then to make sure it’s working.

Issue/epic boards

We use a basic version of Kanban as the starting point for our boards. The board statuses are in progress, blocked, and next.

All columns are stack ranked, with the most important items at the top.

  • Items that are open with no status set are in the backlog.
  • The status next indicates the items the team plans to pick up next. As a rule of thumb, keep no more than ten or so items here, depending on size; more than that tends to cause a lot of shuffling. Typically product managers manage this column, but everyone has a voice in prioritization.
  • blocked is for items that got stuck in some way, to help bring special attention to them. There should be an ongoing focus on blocked items, with comments indicating what the problem is.
  • in progress means that someone has picked up the item and is working on it. Typically, a single person should only have one or two items in progress at any given time.
  • Items that are closed have either been delivered or are no longer relevant. You can open the issue/epic to determine which is the case, but generally you should look at MRs, rather than issues, to see what changed. Be sure your MRs link to the issues you are working on so this is clear.

We want to keep this process as simple as possible so that everyone is empowered to pick up and work on the right things. Anyone can move any issue from any state to any state; just be sure to leave a comment explaining what you did and why.
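As one example of keeping an eye on the board programmatically, here is a minimal sketch using the python-gitlab client. It assumes the statuses are plain labels; the host, token, and project path are hypothetical, so adjust them for our real setup.

```python
import gitlab  # pip install python-gitlab

# Hypothetical host, token, and project path.
gl = gitlab.Gitlab("https://gitlab.com", private_token="YOUR_TOKEN")
project = gl.projects.get("synura/console")

# Surface every blocked item so it gets the attention described above.
for issue in project.issues.list(labels=["blocked"], state="opened", get_all=True):
    print(issue.title, "->", issue.web_url)
```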

User research and testing

We use Userbrain for user research (problem space exploration) and user testing (validation of functionality/prototypes). Userbrain can help you find people to talk to (they have a panel of users you can recruit from) and can facilitate tests with our users/contacts. The Userbrain blog also has various helpful resources on how to create good tests.

As a company, we take talking to customers seriously as part of ideation and building, so don’t hesitate to run tests to validate your assumptions. We are exceptionally transparent, and the best way to take advantage of that is by having as many open conversations as we can to refine our ideas.

Results of your user testing efforts should be uploaded to the user interviews folder in Google Drive (internal only to protect the privacy of interviewees). Create a dated subfolder for each group of interviews you do, and include a summary Google Doc or Spreadsheet in the folder to lay out your findings. When your summary is complete you should share a link to it in the #marketintel channel so everyone can check it out.

If more credits are needed for testing, contact Jason to add them.

Documenting technology decisions

We use Any Decision Records (ADRs) to make and document technical decisions. These are just markdown files with a pre-specified format.

If you need diagrams for your decisions, you can use our company Figma account, or a free Miro account (for UML, for example).

ADRs should be created as a merge request to our Console repository, and the discussion can be had in the MR prior to merging. The files should go in the docs/decisions folder.
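For reference, here is a minimal sketch of what one of these markdown files might look like, loosely following the common MADR layout. Our exact pre-specified format may differ, and the decision shown is hypothetical.

```markdown
# Use WebSockets for live collaboration updates

- Status: accepted
- Date: 2024-05-02
- Deciders: (who was involved)

## Context and Problem Statement

Editors need to see each other's changes in near real time. How should
the client receive updates?

## Considered Options

1. WebSockets
2. Long polling

## Decision Outcome

Chosen option: WebSockets, because latency and connection overhead are
lower for our usage pattern.
```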
