
The Reforge Data Team is hiring

We are hiring three analysts for the Data Team at Reforge: a Product Analyst, a Senior Product Analyst, and a Senior Marketing Analyst.

It’s a very exciting time for Reforge and for the Data Team.

Helping People do the Best Work of Their Careers

Reforge is an amazing opportunity to work on a product that has a significant impact on people’s lives. We are taking the untapped knowledge, frameworks, and practices of industry leaders and making them accessible to our customers. Our goal is to help our members do the best work of their careers by unlocking insights and then helping them apply those insights to their jobs right away.

We routinely hear from our members that Reforge was the best educational experience they’ve had (better than their MBA, if they got one), and that it has helped them be more confident in their role and drive a big impact for their business.

$21 Million Series A Investment

We raised our Series A investment from A16Z in February. At the time, our CEO Brian Balfour wrote about:

  • The history of Reforge
  • Why there’s a real need for our offering
  • The solution we are building

I was lucky to be one of the earliest employees, helping take Reforge from a nascent concept to where it is today. It has been an incredible time to be at the company, but today’s inflection point is even more exciting. While we bootstrapped the company to eight figures of revenue, our recent fundraising round gives us the capital to invest even more aggressively.

We are the rare startup that has real revenue with generous margins and profits, yet also has raised venture capital and the ambition to continue growing 100% each year.

The Data Team at Reforge

The data team is just getting started here at Reforge. We’re looking for people who want to join a small team that is growing fast and who are comfortable with ambiguity and fast-paced change. It’s a great opportunity for people who want to help build a strong data practice at a company that sees data as invaluable in operations as well as in strategic decisions.

The nature of our product is that it’s cross-functional. We create content, host events, and build relationships between people through marketing, community, operations, product, design, and engineering.

We have a central data team so that every part of the org has access to all of the data to deliver an exceptional experience. Every part of the experience should be tailored to your role, your company’s business model, where you’re located, how senior you are, what you’ve done in the platform, and what your goals are today. The data team is here to make sure teams have the information they need to deliver and iterate on an exceptional experience.

The data org owns the growth model for the business. It’s our job to help bring together the metrics for the whole business to understand how we’re trending towards our goals, what is performing well, where there is opportunity to improve, and what the greatest points of leverage are. This is especially important for a cross-functional product like ours, where opportunities require multiple groups within the company to collaborate.

What we’re looking for

We’re looking for someone who can take ambiguous questions, run independent analyses, and then clearly communicate their findings.

  • Analysts will be required to field questions from many potential sources: the leadership team, PMs, marketers, engineering, design, operations, or support. Being able to listen to the questions of teams, clarify their goals, and then come up with the right way to analyze and summarize findings is critical.
  • Oftentimes the initial question asked is not the right one, or it needs to be clarified and adjusted. It is not an analyst’s job to mindlessly give teams the answers to their questions, but to push back when necessary and be a collaborator with their teams to gain insights that will push the product experience forward and improve the business.

An analyst should be deeply quantitative but naturally curious. This person should have opinions about software experiences and be passionate about finding ways to quantify value to end users, impact to the business, areas for improvement, and insights into behavior.

  • Teams may not always know the right question to ask and the best analysts have their own opinions that they pursue.
  • Analysts should understand how the product works, the psychology of its end users, and how the product is different from its alternatives. The quantitative metrics used to measure its performance should flow directly from that deep understanding of the product.
  • The best analysts are looking around corners to help uncover pockets of strength, quantify under-performing areas, and think through the best ways to persuade people about what’s important and where to focus.

An analyst should be an excellent communicator, storyteller, and consultant.

  • We are not looking for analysts who are simply responsible for producing charts. They will need to be able to understand the motivation behind a request, suggest alternatives, and have the conviction to push back when they disagree in order to foster healthy debate.
  • We want analysts who are able to tell a story with the data, explaining the technical elements of an analysis while communicating why it matters.
  • We want analysts to be consultative with their peers – they need to understand what will be impactful and resonate with an audience and tailor the summary accordingly.

The Tools we Use

As much as possible, we use the best-of-breed technology stack available today. We have roots as a profitable, bootstrapped company, so we haven’t upgraded all of our tools yet, but we are constantly evaluating each one to ensure we’re using the best tool for the job.

Our core sources of data:

  • We have a read-replica of our production database so that we can query the latest and greatest production data in real time.
  • We have a data warehouse that is populated hourly by our customer data platform, Segment.
  • We are able to seamlessly run queries that mix and match data between the two systems, so that we can combine the latest production data with an analysis that uses raw event-based data or models computed in the data warehouse. A rough sketch of what that can look like is below.
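
As a rough illustration of what one of those mixed queries can look like, here is a hypothetical sketch; the schema and table names are placeholders, and the exact mechanics depend on how the two systems are exposed to the query layer:

    -- Hypothetical sketch: join users from the production read-replica
    -- with raw Segment page events in the warehouse.
    -- "production" and "segment_events" are placeholder schema names.
    select
        u.id        as user_id,
        u.plan      as plan,
        count(p.id) as pages_viewed_last_30_days
    from production.users u
    left join segment_events.pages p
        on p.user_id = u.id::text          -- cast assumes Segment stores user_id as text
       and p.received_at > now() - interval '30 days'
    group by u.id, u.plan
    order by pages_viewed_last_30_days desc;

The specifics will vary with your setup; the point is that production attributes and event-level warehouse data can be combined in a single query.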

Some of the tools we use:

  • We use DBT for modeling and data transformation in our data warehouse
  • We use GitHub for our DBT models and all of our code
  • We use Metaplane to ensure that we are the first to know of any structural changes to our schemas, as well as unexpected changes or trends within our databases.
  • We use Iterable as our email marketing tool
  • We use Segment for our event pipeline, as our reverse ETL tool, and to populate our data warehouse with event-level information from our data sources.
  • We use Airflow for any jobs that connect to external services and to create more sophisticated models and jobs that can’t be created in pure SQL.
  • We use Amplitude as our behavioral analytics tool to get insights about how people are using our product, key conversion paths and funnels, and retention behavior.
  • We use Metabase as our BI tool. I’ve written about examples of how we use it here and here.
(Figure: a simplified version of our data systems)

How Teams are Structured

We are rapidly building our teams and evolving how they are structured, but our strategy is a product-led experience. That means that our product team is responsible for the core experience of our members. We are structuring teams in pods that own discrete elements of the experience. While every pod may vary, they will typically have:

  • A product manager
  • A product designer
  • A tech lead
  • Multiple software engineers
  • A marketer
  • A product analyst

It will be the product analyst’s responsibility to:

  • Be the expert on the team about data
  • Empower the team to do their own reporting and answer the vast majority of their own questions
  • Do deeper analyses than any other team member can do

If this is interesting to you, please apply here.

Stripe + Segment + Metabase

I’ve written about how we use Metabase at Reforge, and how I’m a big fan. It has allowed us to make data accessible to anyone in the organization, whether it’s for a deep analysis or for a quick status update on an important initiative.

We use Segment as one of the key pieces of our data infrastructure, and I recently turned on the Stripe integration. I was pleasantly surprised by how well it works. I authenticated into our Stripe account to configure it as a source within Segment. Segment then updates tables in our data warehouse with the latest and greatest data from Stripe. This is a screenshot of our database and the tables under the Stripe schema:

On one of our key dashboards showing our progress in generating revenue, I wanted a cumulative revenue chart. This is helpful to see how quickly we’re generating revenue, what our total revenue in a period of time is, and how our revenue growth compares to previous periods.

This is a sample chart that’s easy to set up in Metabase:

To generate it, I wrote a SQL query against our warehouse; a sketch of the query follows the explanation below.

The way it works:

  • The first statement generates a table (that’s the generate_series function) that has a start date and end date
  • I join from that table with a left join to the Stripe payments table. This ensures that if we don’t have a day of revenue, the day still shows up in our results. This gives me the total amount of revenue per day.
  • In the final part of the statement, I use a window function to do a cumulative sum of all of the revenue per day, so that day 2 has all of day 1’s revenue along with day 2’s revenue added in.
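
Here is a minimal sketch of that query. It is not the exact one we run; the schema, table name, start date, and status filter are placeholders for whatever your Segment-synced Stripe schema contains:

    -- Minimal sketch of a cumulative revenue query against a
    -- Segment-synced Stripe schema. Names and filters are placeholders.
    with days as (
        -- one row per day between a start date and today
        select generate_series(
            date '2021-01-01',      -- placeholder start date
            current_date,
            interval '1 day'
        )::date as day
    ),
    daily_revenue as (
        -- left join so days with no payments still show up (as 0 revenue)
        select
            d.day,
            coalesce(sum(c.amount) / 100.0, 0) as revenue   -- Stripe stores amounts in cents
        from days d
        left join stripe.charges c
            on c.created::date = d.day
           and c.status = 'succeeded'
        group by d.day
    )
    select
        day,
        revenue,
        -- running total: each day includes all prior days' revenue
        sum(revenue) over (order by day) as cumulative_revenue
    from daily_revenue
    order by day;

The division by 100 assumes Stripe’s convention of storing amounts in the smallest currency unit (cents for USD).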

I hope this helps you create similar charts of revenue per day or cumulative revenue reports. I’ve worked with teams where we’ve manually hit the Stripe API to pull all of this information, which is always a pain in the butt. The nice part of this solution is that Segment keeps the table up to date automatically, and then anyone can run these reports on the latest values in the data warehouse.

While we’re still an early-stage company, this type of integration is super powerful. There are a lot of interesting applications of being able to access failed charges, successful charges, refunds, and the various types of paying customers when joined to other data sources.

How we use Metabase

When I joined Reforge a year ago, I found that we were querying our databases manually to do routine analysis. If we wanted to update the team on the number of people who had applied or paid for our programs, we’d run a query against the database and then put the results in a spreadsheet. If we wanted a list of users from our programs by company, we’d run a query and put it in a spreadsheet. While this answered our questions at the time, I felt like we could do a lot better. After having used Looker in my time at HubSpot, I wanted a lightweight solution to help us enable the entire company to have access to critical data about the business and make data-informed decisions. We started using Metabase.

Metabase has been a huge help for me in democratizing access to our data. Metabase connects directly to any databases you want, and it allows anyone in the company (I have chosen not to set up advanced permissions yet) to manually inspect data, do advanced reporting, or view dashboards.

This is an example of what it looks like when someone looks at our program applications table (simple database table that keeps track of applications to our growth programs):

Benefits:

  • Team members can look at the table as if it were a spreadsheet
  • They can apply filters as if it were a spreadsheet
  • They can visualize the results more easily than in a spreadsheet

I routinely build reports in Metabase that filter to people with a certain condition and send them to teammates. It’s so easy to report on people who work at company X that are participating in one of our programs. It’s much better to generate a simple report and share it with a colleague, knowing that it will always be up to date, even as our underlying data changes.

You can also easily switch between a table view and many other ways to visualize the data (table, line chart, area chart, bar chart, row chart, scatter chart, pie chart, and a map):

Once you’ve filtered your data set and chosen how to visualize it, it’s easy to then add it to a dashboard of other reports. It’s really nice that you can combine data from multiple databases into the same dashboard, and drag and drop the charts in any configuration you want. This is a dashboard that I set up to monitor the performance of applications submitted to our recent cohort of programs, as well as how people were paying for their spot. It has a segmentation of which programs they’re applying to, how much revenue we’ve generated, how we’re comparing to previous periods, and where people are submitting applications from:

There are a ton of other features that I am a huge fan of. Some of them:

  • Posting questions to Slack at a regular interval:

Or via email:

It has been a huge help for me personally, and this doesn’t even cover all of the ways in which we use it. Best of all, it’s free and open source. We pay to host it ourselves via Amazon Elastic Beanstalk.

This kind of solution comes in incredibly handy in our overall data pipeline, especially when we can point it to a copy of our production database and our analytics data warehouse that is populated by Segment.

Email Cohort Retention

I have used and love behavioral analytics tools like Amplitude, Mixpanel, Heap, and Pendo. They’re life-savers if you’re a product manager, marketer, designer, analyst, or engineer focused on improving the product experience. If I were dropped into any company’s product management team, it would be one of my initial asks: point me to your data system and let me understand your metrics. Last year, as I was helping to launch an email newsletter, I wanted to leverage the same types of analyses I did for products, but for email. I spoke with a couple of experts in the email industry to understand what to measure, and they told me to:

  • Monitor my engagement metrics by email provider
  • Remove non-engaging contacts from our email distribution list
  • Monitor my long term retention of cohorts of contacts

These felt like classic behavioral analytics problems from the product space, but email focused. I assumed that somebody was enabling this kind of analysis for the email space, right?

Nope. I worked at HubSpot for five years, and I have so much respect for that product team. They’re badass, plain and simple (crazy smart, humble, and they get stuff done). They built some simple features to answer some of these questions, but they don’t provide retention analysis across all of your email campaigns. Does Mailchimp offer anything like this? Nope. What about AutoPilot, the company we were using when I joined Reforge? Nope. I did a quick search and I didn’t find any company that provides this type of feature.

One of the core things we teach at Reforge is that retention is king – it makes or breaks your company (acquisition, monetization, payback period, competitive advantage). So I set out to measure it.

It was pretty simple, once I got the pieces working together:

  • I turned on the AutoPilot source in Segment and piped the data to our data warehouse. Luckily we’re not Amazon, so a simple Postgres database easily housed the data for this new product.
  • I turned on the Sendgrid source in Segment, then spent weeks going back and forth with Segment’s support team figuring out how to properly configure webhooks so email activity data flowed into our data warehouse.
  • I wrote a Jupyter notebook that bucketed contacts into their weekly subscriber cohorts and then built retention heatmaps based on the email activity data from both our email marketing system and our transactional emails (the core of that bucketing is sketched after this list).
  • I ran a script that queried DNS for a domain’s email provider so I could segment the retention curves by email provider (G Suite, Microsoft, AOL, Yahoo, etc.).
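
To sketch the core of that cohort bucketing, here it is re-expressed as SQL; the table and column names are hypothetical stand-ins for the Segment-synced subscriber and email activity data:

    -- Hypothetical SQL re-expression of the notebook's cohort bucketing.
    -- "subscribers" and "email_events" stand in for the Segment-synced tables.
    with cohorts as (
        -- assign each contact to the week they subscribed
        select
            email,
            date_trunc('week', subscribed_at)::date as cohort_week
        from subscribers
    ),
    activity as (
        -- any open or click counts as activity for that week
        select distinct
            email,
            date_trunc('week', occurred_at)::date as activity_week
        from email_events
        where event in ('open', 'click')
    )
    select
        c.cohort_week,
        (a.activity_week - c.cohort_week) / 7 as weeks_since_subscribe,
        count(distinct a.email)               as active_subscribers
    from cohorts c
    join activity a
      on a.email = c.email
     and a.activity_week >= c.cohort_week
    group by 1, 2
    order by 1, 2;

Pivoting cohort_week against weeks_since_subscribe (and dividing by each cohort’s size) yields the retention heatmap, and the provider segmentation simply adds the DNS-derived provider as another grouping column.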

The outputs looked like this (non-segmented charts):

This helped us to answer key questions like:

  • What percentage of our subscriber cohorts were active N weeks after they subscribed?
  • Did we have a sticky email newsletter? Did people stick around long term?
  • Would we be able to sustainably grow our subscriber base if we kept acquisition constant or grew it over time?
  • How did our retention curves look by email provider?
  • Who were our most prolific consumers (forwarding emails to others, consuming regularly, etc)?
  • Who should we be removing from our distribution lists (so that we weren’t hurting our sender reputation scores with the email providers)?

It made me ask myself, why don’t email companies provide this kind of functionality? Some thoughts:

  • Mailchimp, HubSpot, and companies like them are focused on all of the other aspects of email: helping people design emails, set up automation, and measure individual campaign performance. For most of their customers, the bigger problem is not having enough contacts to email in the first place, not having a well-designed email, or wanting to analyze a single campaign rather than look at the health of an entire contact database.
  • Cohort analysis is not something many people find intuitive, and is a relatively advanced topic. There are still many product teams that don’t measure it, and I expect it’ll come to marketing tools eventually.
  • This is a big-company problem, and big companies will end up writing custom software to solve it for themselves. For everyone else, this isn’t a must-have.

Is there some easier way to do this? Is there a company that enables this? Let me know, I’d love to use their less-buggy code. I am trying to clean up the code so it’s half respectable, and I will try to post it when I can.

Being the closest to the customer

If you read about the strategy of successful tech companies today, it’s all about having “obsessive customer focus” (Jeff Bezos’ 2016 Amazon annual shareholder letter). You’ll hear that “whoever gets closest to the customer wins” (Drift), and that companies want to “solve for the customer” (HubSpot culture code). Ultimately, the question isn’t whether you’re focused on your customers, but how you go about evaluating who they are, what they’re trying to accomplish, and how they’re interacting with your product. I recently started using a workflow that gives me a continual stream of feedback, allows me to go back and forth to dig deeper and clarify important questions, and then also easily share the results with my team. It also required no effort from our engineering team to set up and didn’t require additional budget.

I work at Reforge, and we’re an education company. We offer programs for those located in SF, but we also have an equivalent online-only experience. For a bunch of our key initiatives this year, I felt that I didn’t fully understand what brought people back to our web app after some time away, and I wanted to dive deeper so we could improve it. Rather than run a one-off survey, I set up a campaign that runs continuously to deliver this feedback on a daily basis.

Using Segment to create our list of “alumni” to survey

When I arrived at Reforge we were already using Segment, and we ended up buying their Personas add-on. I’m a relatively happy customer (and happy they just raised $175 million), but I used it mainly because we were already paying for it. I’m a big fan of using the tools available to you.

Segment has a feature called Audiences that lets you create lists of people. Since most of our important data attributes and events were already flowing through Segment, it was very easy to define small segments or large swathes of our user base through a simple editor. While I live and breathe SQL, sometimes it’s really nice to build it out in a GUI. Here’s what I built:

The nice thing about this is that when someone enters this audience, it means that they’re an alum, they’re not in our most recent cohort, and they have viewed our online material. The cool thing is that you could specify anything about the users that you have available (role, country, seniority, depth of engagement, type of user, organization, etc).

A feature of Audiences is that you know when someone enters the audience. The way I’ve structured this audience, entering it means that they came back to our site and it has been more than 90 days since their last visit; otherwise they’d already have been a member of the audience. So it’s a cool way to know when someone has come back.

Send the list of people to Zapier

Segment then allows you to send this information to any of their supported destinations. I sent this information to Zapier:

Segment tells Zapier that the user has come back to our site, and Zapier then writes it to a Google spreadsheet. This is what our spreadsheet looks like:

What you can see here is that a user from HubSpot named Kieran has revisited how to build a qualitative growth model. I am also using a Segment API to pull in their first name and the title of the last page they visited, in case I want to include that in my outreach asking for feedback. Segment is continuously sending data about people coming back to Reforge, and each time it happens Zapier is writing it to a Google Spreadsheet for me.

Email the people revisiting the material

I then have another Zap that takes the rows from this spreadsheet and emails the person from my personal G Suite account.

You can see what it looks like if I look in my sent folder within Gmail (and you can see that people have replied to the email in Gmail’s threads):

Via this workflow, I’m automatically emailing people that are coming back to our site asking them what brought them back. I love getting a continual stream of this feedback. Because it’s in Gmail, I can go back and forth with them to clarify what they mean and to dive deeper to understand. If you have a huge user base, you can easily filter down the number of people you email with another step in Zapier (mod their user id by a number to make sure you don’t email too many people at once).

Collect all of the feedback in a Google Doc

At this point I’m automatically emailing everyone from the segment I care about, and I’m able to go back and forth to clarify any questions that I have or dig deeper. Rather than copy and paste their responses into a Google Doc to share snippets of feedback / soundbites, I hooked up another Zap to automatically pull in their feedback and put it in a Google Doc.

When the feedback emails come in, a Gmail filter I set up automatically applies a label to them. I use Zapier to look for new emails under that label, and I exclude any emails from me (my replies to them). Zapier then adds each email as a row in a Google Sheet:

Then I used a couple of simple formulas to combine all of the emails from a single user into a single row in another spreadsheet:

I separate the replies with a ———, so the above row represents multiple emails back and forth with this person. You can see that my first question was about use cases, and my third was about the ones that come up frequently.

Column A is set to =UNIQUE(Emails). That means that there will only be one row for each email address I have feedback from. The formula for column B is =ARRAYFORMULA(TEXTJOIN(CHAR(10) & "——–" & CHAR(10), TRUE, IF(Emails=A2, Response, ""))).

Array formulas are really cool. I’ve only needed to use them a couple of times, but I am always so impressed with their functionality. Basically, this formula tells Google Sheets to combine all of the emails from a given user (remember, each row is a single user) with the “——–” separator.

This is pretty powerful. Now I have a spreadsheet that has the entire conversation with someone in a single spreadsheet row, and I can share that spreadsheet with my entire team. We can then add columns to categorize feedback into buckets for easy filtering / reading.

Why I love this approach:

  • I get to read feedback from a critically important segment of users every day. I can define multiple segments to run simultaneously. The only limits are the number of emails I can send from my Google account (2,000 messages per day) and the number of emails I have time to respond to.
  • I get to follow up with them in my main email tool
  • Their feedback then gets pulled into a spreadsheet automatically, where it can be shared with my team members, categorized, and filtered.
  • While this isn’t the easiest thing to put together, it is so much easier than it used to be: writing complex SQL by hand, setting up a cron job to run it, writing a custom Gmail script, and then storing the results in some database or Google Sheet. If you have read this far and are thinking about building this, let me know; I’d love to try it out first.

Is there an easier way to accomplish this? Let me know, I’d love to switch to it.
