Sous Vide For Nerds (With Limited Cooking Experience)

Something a little different to share today, but important if you (a) are not especially gifted at (or interested in) cooking, (b) love great food, and (c) are a bit of a nerd. Sous vide is the technique, and the Joule is the solution.

Sous vide is a method of cooking that involves sealing food in a bag and submerging it in a pan of water held at a precisely regulated temperature. You then essentially slow-cook the food, but because the water temperature is so consistent, the food cooks evenly all the way through.

The result of this is phenomenal food. While I am still fairly new to sous vide, everything we have tried has been a significant improvement compared to regular methods (e.g. grilling).

As an example, chicken is notoriously difficult to cook well. When I sous vide chicken and then sear it on the grill (to get some delicious char), the result is incredibly tender and juicy meat with the ideal grilled texture and flavor.

Steak is phenomenal too. I use the same technique: sous vide it to a medium-rare doneness and then sear it at high heat on the grill. Perfectly cooked steak.

A particular surprise here is eggs. When you sous vide an egg, the yolk texture is undeniably better: it takes on an almost custard-like consistency that brings the flavor to life.

So, sous vide is an unquestionably fantastic method of cooking. The big question is, particularly for the non-cooks among you, is it worth it?

Sous vide is great for busy (or lazy) people

Part of why I am loving sous vide is that it matches the formula I want to experience in cooking:

Easy + Low Effort + Low Risk + Minimal Cleanup = Great Food

Here’s the breakdown:

  • Easy – you can’t really screw it up. Put the food in a bag, set the right temperature, come back after a given period of time, and your food is perfectly cooked.
  • + Low Effort – it takes a few minutes to start the cooking process and you can do other things while it cooks. You never need to babysit it.
  • + Low Risk – with sous vide you know the food is cooked evenly. With chicken, for example, it is common when grilling to get a cooked exterior while the middle remains undercooked, so people overcook it to be safe. With sous vide you simply ensure you cook it to a safe temperature and it is consistently cooked throughout.
  • + Minimal Cleanup – you put the food in a bag, cook it, and then throw the bag away. The only thing to wash is the pan of water (about as easy to clean as it gets). Perfect!

Thus, the result is great food and minimal fuss.

One other benefit is reheating food to eat later.

As an example, right now I am sous vide-ing (is that a word?) a pan full of eggs. These will be for breakfast every day this week. When the eggs are done, we will pop them in the fridge to keep. To reheat, we simply submerge the eggs in boiling water, which raises the internal temperature back up. The result is the incredible sous vide texture and consistency, but it takes merely (a) boiling the kettle, (b) submerging the eggs, and (c) waiting a little bit to get the benefits of sous vide later.

The gadget

This is where the nerdy bit comes in, but it isn’t all that nerdy.

For Christmas, Erica and I got a Joule. Put simply, it is a white stick that plugs into the wall and connects to your phone via Bluetooth.

You fill a pan with water, pop the Joule in, and search for the food you want to cook. The app will then recommend the right temperature and cooking time. When you set the time, the Joule turns on and starts circulating the water in the pan until it reaches the target temperature.
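
If you are curious what the stick is actually doing while you wait, it is essentially running a feedback loop: measure the water, compare to the target, adjust the heater, repeat. Here is a toy Python sketch of that idea with a crudely simulated water bath (the Joule’s real firmware is proprietary, and the physics constants here are invented purely for illustration):

```python
import random

TARGET_C = 62.0   # e.g. a typical recommendation for chicken breast
HYSTERESIS = 0.2  # allowable wobble around the target, in degrees C

temp = 20.0       # start with tap water
heater_on = False

# A simple bang-bang control loop over simulated water. Real circulators
# typically use PID control for tighter accuracy, but the principle is the
# same: measure, compare, adjust, repeat, so the bath never drifts.
for second in range(3600):
    if heater_on:
        temp += 0.05                           # heater adds energy
    temp -= 0.005 + random.uniform(0, 0.003)  # heat lost to the room

    if temp < TARGET_C - HYSTERESIS:
        heater_on = True
    elif temp > TARGET_C + HYSTERESIS:
        heater_on = False

print(f"After an hour the bath sits at {temp:.1f}C")
```

This is why the food can’t overcook: the water physically cannot drift past the target, so the food simply equalizes to it.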

Next, you put the bagged food in the water and the app starts the timer. When the timer is done, your phone gets a notification, you pull the food out, and bingo!

The library of food in the app is enormous and even helps with how to prepare the food (e.g. any recommended seasoning). If, though, you want to ignore the guidance and just set a temperature and cooking time yourself, you can do that too.

When you are done cooking, throw away the bag you cooked the food in, empty the water out of the pan, and put the Joule back in the cupboard. Job done.

Now, to be clear, there are many other sous vide gadgets out there; the Joule is the only one I have tried, and it has been brilliant.

So, that’s it: I just wanted to share this recent discovery. Give it a try, I think you will dig it as much as I do.

Joining the data.world Advisory Board

I have previously posted pieces about data.world, an Austin-based startup focused on providing a powerful platform for data preparation, analysis, and collaboration.

data.world were previously a client of mine, for whom I helped shape their community strategy, and I have maintained a close relationship with them ever since.

I am delighted to share that I have accepted an offer to join their Advisory Board. As with most advisory boards, this will be a part-time role where I will provide guidance and support to the organization as they grow.

Why I Joined

Without wishing to sound terribly egotistical, I often get offers to participate in an advisory capacity with various organizations. I am typically loath to commit too much as I am already rather busy, but I wanted to make an exception for data.world.

Why? There are a few reasons.

Firstly, the team are focusing on a really important problem. As our world becomes increasingly connected, we are generating more and more data. Sadly, much of this data is in different places, difficult to consume, and disconnected from other data sets.

data.world provides a place where data can be stored, sanitized/prepped, queried, and collaborated around. In fact, I believe that collaboration is the secret sauce: when we combine a huge variety of data sets, a consistent platform for querying, and a community with the ingenuity and creative flair for querying that data…we have a powerful enabler for data discovery.

data.world provides a powerful set of tools for storing, prepping, querying, and collaborating around data.

There is a particularly pertinent opportunity here. Buried inside individual data sets there are opportunities to make new discoveries, find new patterns/correlations, and use data as a means to make better decisions. When you are able to combine data sets, the potential for discovery exponentially grows, whether you are a professional researcher or an armchair enthusiast.
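
To make that concrete, here is a minimal sketch of the kind of question that only becomes askable when two independently published data sets meet. The file names and columns are invented for illustration; data.world’s platform does this kind of joining and querying at much larger scale:

```python
import pandas as pd

# Two hypothetical open data sets, published independently of each other.
air_quality = pd.read_csv("city_air_quality.csv")    # columns: city, pm25
asthma_rates = pd.read_csv("city_asthma_rates.csv")  # columns: city, asthma_per_1000

# Individually, each set answers a narrow question. Joined on a shared
# key, they let you ask a brand new one: does air quality track asthma?
combined = air_quality.merge(asthma_rates, on="city")
print(combined[["pm25", "asthma_per_1000"]].corr())
```

Neither data set alone could answer that question; the discovery lives in the combination.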

This is why the community is so important. In the same way GitHub provided a consistent platform for millions of developers to create, fork, share, and collaborate around code…both professionals and hobbyists…data.world has the same potential for data.

…and this is why I am excited to be a part of the data.world Advisory Board. Stay tuned for more!

Video: Measuring Community Health

One of the most challenging components of building a community is how to (a) determine what to measure, (b) measure it effectively, and (c) interpret those measurements in a way that drives improvements.

Of course, what complicates this is that communities are a mixture of the tangible (things we can measure with a computer) and the intangible (things such as “enablement”, “happiness”, and “satisfaction”).
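
The tangible side is often simpler than people expect once the raw data is in hand. As an illustration (with invented data, not from any real community), here is one metric I find consistently useful, time to first response, which is a measurable proxy for the intangible quality of how welcoming a community feels:

```python
from datetime import datetime
from statistics import median

# Hypothetical forum threads: (posted, first_reply) timestamp pairs.
threads = [
    (datetime(2017, 7, 3, 9, 0),  datetime(2017, 7, 3, 10, 30)),
    (datetime(2017, 7, 3, 14, 0), datetime(2017, 7, 4, 8, 0)),
    (datetime(2017, 7, 4, 11, 0), datetime(2017, 7, 4, 11, 20)),
]

# The median (rather than the mean) resists being skewed by the one
# thread that sat unanswered over a weekend.
hours = [(reply - posted).total_seconds() / 3600 for posted, reply in threads]
print(f"Median time to first response: {median(hours):.1f} hours")
```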

Here is a presentation I delivered recently that provides an overview of the topic and plenty of pragmatic guidance for how you put this into action:

If you can’t see the video, click here.

Clarification: Snappy and Flatpak

Recently, I posted a piece about distributions consolidating around a consistent app store. In it I mentioned Flatpak as a potential component and some people wondered why I didn’t recommend Snappy, particularly due to my Canonical heritage.

To be clear (and to clear up my inarticulacy): I am a fan of both Snappy and Flatpak: they are both important technologies solving important problems, and they are both driven by great teams. To be frank, my main interest and focus in my post was the notion of a consolidated app store platform as opposed to what the specific individual components would be (other people can make a better judgement call on that). Thus, please don’t read my single-line mention of Flatpak as any criticism of Snappy. I realize that this may have been misconstrued as me suggesting that Snappy is somehow not up to the job, which was absolutely not my intent.

Part of the reason I mentioned Flatpak is that I feel there is a natural center of gravity forming around the GNOME Shell and platform, which many distros are shipping. Within the context of that platform I have seen Flatpak commonly mentioned as a component, hence why I mentioned it. Of course, there is no reason why Snappy couldn’t be that component too, and the Snappy team have been doing great work. I was also under the impression (entirely incorrectly) that Snappy is focusing more on the cloud/server market. It has become clear that the desktop is very much within the focus and domain of Snappy, and I apologize for the confusion.

So, to clear up any potential confusion (I can be an inarticulate clod at times), I am a big fan of Snappy, a big fan of Flatpak, and an even bigger fan of a consolidated app store that multiple distros use. My view is simple: competition is healthy, and we have two great projects and teams vying to make app installation and management on Linux easier. Viva la desktop!

Consolidating the Linux Desktop App Story: An Idea

When I joined Canonical in 2006, the Linux desktop world operated in a very upstream way. All distributions used the Linux kernel, all used X, and the majority shipped either GNOME, KDE, or both.

The following years mixed things up a little. As various companies pushed for consumer-grade Linux-based platforms (e.g. Ubuntu, Fedora, Elementary, Android etc), the components in a typical Linux platform diversified. Unity, Mir, Wayland, Cinnamon, GNOME Shell, Pantheon, Plasma, Flatpak, Snappy, and others entered the fray. This was a period of innovation, but also endless levels of consternation: people bickering left, right, and center, about which of these components were the best choices.

This is normal in technology, both the innovation and the flapping of feathers in blog posts and forums. As is also normal, when the dust settled a natural set of norms started to take shape.

Today, I believe we face an opportunity to consolidate around some key components, not just to go faster, but also to avoid the mistakes of the past.

App Stores are Hard

Historically, one of the difficulties with shipping a Linux desktop was differentiation.

I remember this vividly in my days at Canonical. People always praised Ubuntu for two main reasons: (1) you could get the exciting new technology in Ubuntu first, and (2) shit just worked.

While the latter was and is always key, the former was always going to have a short shelf life. While enthusiasts are willing to upgrade their desktops every six months, businesses and non-nerds are not, so Ubuntu needed to have a way to differentiate.

The result of course was Unity, Scopes, and the Ubuntu Software Center (and associated developer program). Here’s the thing though: building an app store is relatively simple, but building the ecosystem which makes developers want to get their applications in that store is really hard.

Pictured: An app store that is almost finished.

Most app developers and ISVs don’t care about your product, they care about the size of the market you can expose their product to. They also care about a logical revenue model and simplicity in delivering their app on your store: they want to make money without jumping through hoops.

Building all of this requires huge amounts of work, including engineering, developer engagement, on-boarding, and business development. We took a pretty good swing at it in Ubuntu and it was hard, and Microsoft poured millions of dollars into it for their phone and even that didn’t work.

The moral of this story is that differentiation is important, but we have to be realistic in what it takes to differentiate at this level. I think if we want the Linux desktop to grow, we have to strike the right balance between differentiation (giving people a reason to use your product) and consistency (not re-inventing the wheel).

Now, critics will say that they knew this all along and Ubuntu should never have focused on Unity, Scopes etc. I don’t believe it is as clear cut as those critics might think: few Linux platforms (if any?) had taken a serious whack at building a consumer-grade app and developer experience. We tried, it was not successful, and instead of digging up the past I would rather ensure we can inform the future.

The good news is that I think we have a better opportunity for this than ever before.

Building a Standard Linux Desktop Core

What I want to see is the various distributions putting at the core of their platforms a central app repository based on Flatpak, complete with the ecosystem pieces (e.g. an interface for developers to upload their apps, scripts for scanning packages for security issues, tools to edit app store pages, a payments mechanism to support the purchasing of apps etc).

All distributions would then use this platform instead of trying to reinvent the wheel, but they could customize their own app store experience and filter apps in different ways. For example, a GNOME-based distribution may only want to pull in GTK-based apps, another distro may only want to support free software apps, and another may only want apps written in a certain language. This way, no-one is forced into the same policy about what apps they ship: the shared app platform is a big bucket that you can pull the right pieces from.
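
As a sketch of that “big bucket” idea (the app names and metadata below are entirely invented), each distro’s store would simply apply its own policy as a filter over the shared catalog:

```python
# Hypothetical entries in the shared, Flatpak-based app catalog.
catalog = [
    {"name": "PhotoTool", "toolkit": "GTK", "license": "free"},
    {"name": "PaintApp",  "toolkit": "Qt",  "license": "free"},
    {"name": "ChatThing", "toolkit": "Qt",  "license": "proprietary"},
]

# Every distro pulls from the same bucket but applies its own policy.
gnome_view = [a["name"] for a in catalog if a["toolkit"] == "GTK"]
free_view  = [a["name"] for a in catalog if a["license"] == "free"]

print(gnome_view)  # what a GNOME-centric distro's store would show
print(free_view)   # what a free-software-only distro's store would show
```

The policy lives in the store, not in the platform, which is exactly what lets very different distros share the same underlying repository.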

This would have a number of benefits:

  • We consolidate resources around a central platform.
  • From my experience, app developers and ISVs are freaked out about the Linux world due to all the different platforms. This would provide a singular way of addressing Linux as a platform.
  • We provide a single set of usage data to app developers and ISVs (instead of an individual distro’s stats for downloads, we can show all distros that use the system for download stats). This is an important marketing benefit.
  • Better security: updates can be delivered to multiple different distributions.

Now, of course, this will require some collaboration and there will be some elephants in the room to figure out.

Yep, it is the elephant in the room. Bad dum tish.

One major elephant will be whether this platform supports non-free software. To be completely blunt, unless we support non-free apps (e.g. Slack, Steam, Photoshop, Spotify etc), it will never break into the wider consumer market. People judge their platforms based upon whether they can use the things they like and irrespective of the rights and wrongs in the world, most people depend on or want non-free apps. Of course, I wish we could have a free software and open source technology world like the rest of you, but I think we need to be realistic.

This needn’t be a blocker, though: distros with a focus on free software can simply filter so that only free software apps are shown to their users, while distros that are open to non-free apps can benefit from the same platform.

This approach will offer huge value for companies investing in the Linux desktop too: reduced engineering costs (and expanded innovation), differentiation in how you present and offer apps, and the benefit of likely more app devs and ISVs wanting to ship apps (thus making the platform more valuable).

A Good Start

The good news is that today I feel we have a bunch of the key pieces in place to support this kind of initiative, including:

  • GNOME Software – a simple and powerful store for browsing and installing software.
  • Flatpak – Flatpak is a simple and efficient packaging format for delivering applications (I am recommending Flatpak instead of Snappy as Snappy seems to be more focused on the cloud and server side of things these days, whereas Flatpak is squarely aimed at the desktop).
  • Wayland – Wayland is a modern display server.

If we took these pieces, brought them under the banner of something such as FreeDesktop, and built support from the various distros (e.g. Ubuntu, Fedora, Endless, Debian, Elementary etc), I think it would be a phenomenally valuable initiative and would really optimize the success of the Linux desktop.

I would love to hear your thoughts on this; share them in the comments. Good idea? Bad idea? Somewhere in-between?

UPDATE: It seems I inadvertently left the impression in this post that I was not supporting Snappy as a potential component here. Please see this post for a clarification.

Open Community Conference: Updates, CFP, Webinar, and Prizes

A little while back I announced that I am starting a new conference called the Open Community Conference in conjunction with my friends at the Linux Foundation.

Put simply: the Open Community Conference provides a raft of presentations, panels, and BoFs with pragmatic guidance for building and engaging productive communities.

While my other event, the Community Leadership Summit, provides a set of workshops for community managers to shape community strategy, the Open Community Conference presents easily consumable and applicable best practice for organizations and practitioners. It is an ideal event for those of you who want to learn pragmatic approaches for how to evolve community strategy with your products/services.

I am running the Open Community Conference in two locations this year: Los Angeles and Europe.

The Open Community Conference is one of the major events as part of the Open Source Summit in each location.

Open Community Conference America Schedule Published

I am delighted to share that the schedule for the Open Community Conference in Los Angeles is now available here.

Some sessions I am particularly excited about include:

  • Aim to Be an Open Source Zero – Guy Martin, Autodesk
  • Building Open Source Project Infrastructures – Elizabeth K. Joseph, Mesosphere
  • Scaling Open Source – Lessons Learned at the Apache Software Foundation – Phil Steitz, Apache Software Foundation
  • So You’ve Decided You Need an Open Source Program Office – Duane O’Brien, PayPal & Nithya Ruff, Comcast
  • Why I Forked My Own Project and My Own Company – Frank Karlitschek, ownCloud
  • So You Have a Code of Conduct… Now What? – Sarah Sharp, Otter Tech
  • Bootstrapping Community – Colin Charles, Percona
  • Fora, Q&A, Mailing Lists, Chat…Oh My! – Jeremy Garcia, LinuxQuestions.org / Datadog
  • Open Source Licensing 101 – Jim Jagielski, Capital One
  • Selling Open Source, Keeping Your Soul – Jessica Rose, Crate.io
  • Venture Capital Community: Applying Open Source Principles to Disrupt a Traditional Industry – Cory Bolotsky, Underscore VC

There are many more sessions as part of the schedule too, covering a diverse range of areas.

I will also be delivering a keynote and an additional session called Building Predictable Community: Strategy, Incentives, Value, and Psychology.

Webinar: 24th July at 9.30am Pacific

I will also be running a webinar on Monday 24th July 2017 at 9.30am Pacific where I will talk about the conference and answer questions about community strategy.

Also (and as a sneak peek, as it hasn’t been announced yet 😉), if you post questions to me on Twitter with the #AskJono hashtag about community strategy, leadership, open source, innersource, or the conference, you can win 3 free tickets to the event (including all the sessions, networking events, and more).

All of the questions will be answered on the webinar.

Go and sign up for the webinar here!

Open Community Conference Europe CFP Closes 8th July

Finally, for the Open Community Conference in Europe, the Call For Papers closes on Sat 8th July 2017 (which is tomorrow as I write this).

If you are interested in sharing your pragmatic experience and recommendations about building powerful, productive, engaged communities, go and submit your paper here.

UPDATE: the CFP is now closed.

Optimizing On-Ramps, Community Strategy, and On-Boarding

As part of my work, I tend to write a lot of articles, participate in interviews, and various other things. Previously I have not done a very good job at sharing these things on my blog, but as the number of people who subscribe to my posts seems to be growing, I am going to make a point of sharing these pieces here.

So, here are some recent pieces that you might be interested in.

Designing For Participation: Take Your Site’s UX to the Next Level

This week a new article I wrote for Velocitize went online. It covers how every website for a product or project can be broken down into an on-ramp that we can then optimize to drive the outcomes we want. From the piece:

Fundamentally, websites should (a) deliver information we want the reader to consume, and (b) encourage user behavior we want to see. For example, we might want to show someone our product and then have them sign up for a demo. Or, we might want someone to read and comment on our blog. First, sit down and think of these desired core outcomes. Now, for each, map out an on-ramp that breaks down how someone would get there.

The piece then walks through a sample on-ramp and shows how it can be used to break the experience down into pieces we can optimize.
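
As a rough illustration of what that breakdown looks like (the steps and numbers here are invented), an on-ramp is really just a funnel whose steps you can measure and improve individually:

```python
# A hypothetical on-ramp: how many visitors survive each step.
onramp = [
    ("landed on homepage",   10000),
    ("clicked 'Learn more'",  3200),
    ("started sign-up",        900),
    ("requested a demo",       240),
]

# Step-by-step conversion shows exactly where the experience leaks,
# which tells you which piece of the on-ramp to optimize first.
for (step, count), (_, next_count) in zip(onramp, onramp[1:]):
    print(f"{step} -> next step: {next_count / count:.0%}")
```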

The article also runs through a checklist of recommendations for optimizing a website, including:

  1. Design for laziness…and SEO
  2. Have a clear and simple navigation
  3. Deliver value for users without signing up
  4. Have a single call to action on each page
  5. Test extensively with real world users

Go and read the piece here.

What I’ve Learned…With Jono Bacon

Recently I was asked to join an interview with OpenChannel where they asked me about a range of topics including building a community narrative, structuring community strategy, gathering community feedback, trends in commercial communities, and more.

Go and read the piece here.

Bad Voltage: Wikipedia, On-Boarding, and Resolving Community Issues

I co-founded a podcast called Bad Voltage which covers technology, open source, and other topics.

In the most recent show we touched on an interesting research study into Wikipedia community on-boarding, how it was optimized, and the lack of impact in solving their broader on-boarding issues. In the segment I delve into why this wasn’t particularly surprising to me, and where and how we should focus on these kinds of challenges in communities.

Click play below to listen to the segment:

As usual, I recommend you subscribe to get updates with new posts, content, and recommendations direct to your email.

Innersource: A Guide to the What, Why, and How

In recent years innersource is a term that has cropped up more and more. As with all new things in technology, there has been a healthy mix of interest and suspicion around what exactly innersource is (and what it isn’t).

As a consultant I work with a range of organizations, large and small, across various markets (e.g. financial services, technology etc) to help them bring innersource into their world. So, here is a quick guide to what innersource is, why you might care, and how to get started.

What is Innersource?

In a nutshell, ‘innersource’ refers to bringing the core principles of open source and community collaboration within the walls of an organization. This involves building an internal community, collaborative engineering workflow, and culture.

This work happens entirely within the walls of the company. For all intents and purposes, the company develops an open source culture, but entirely focused on their own intellectual property, technology, and teams. This provides the benefits of open source collaboration, community, and digital transformation, but in a safe environment, particularly for highly regulated industries such as financial services.

Innersource is not a product or service that you buy and install on your network. It is instead a term that refers to the overall workflow, methodology, community, and culture that optimizes an organization for open source style collaboration.

Why do people Innersource?

Many organizations are very command-and-control driven, often as a result of their industry (e.g. highly regulated industries), the size of the organization, or how long they have been around.

Command-and-control driven organizations often hit a bottleneck in efficiency which results in some negative outcomes such as slower Time To Market, additional bureaucracy, staff frustration, reduced innovation, loss of a competitive edge, and additional costs (and waste) for operating the overall business.

An unfortunate side effect of this is that teams get siloed, and this results in reduced collaboration between projects and teams, duplication of effort, poor communication of wider company strategic goals, territorial leadership setting in, and frankly…the organization becomes a less fun and inspiring place to work.

Pictured: frustration.

While the benefits of open source have been clearly felt in reducing costs for consuming and building software and services, there has also been substantive value for organizations and staff who work together using an open source methodology. People feel more engaged, are able to grow their technical skills, build more effective relationships, feel their work has more impact and meaning, and experience more engagement in their work.

It is very important to note that innersource is not merely about optimizing how people write code. Sure, workflow is a key component, but innersource is fundamentally cultural in focus. You need both: if you build an environment that (a) has an open and efficient peer-review-based workflow, and (b) has a culture that supports cross-departmental collaboration and internal community, the tangible output is, unsurprisingly, not just better code, but better teams and better products.

What are the BENEFITS of innersource for an organization?

There are a number of benefits for organizations that work in an innersource way:

  • Faster Time To Market (TTM) – innersource optimizes teams to work faster and more efficiently and this reduces the time it takes to build and release new products and services.
  • Better code – a collaborative peer-review process commonly results in better quality code as multiple engineers are reviewing the code for quality, optimization, and elegance.
  • Better security – with more eyeballs on code due to increased collaboration, all bugs (and security flaws) are shallow. This means that issues can be identified more quickly, and thus fixed.
  • Expanded innovation – you can’t successfully “tell” people to innovate. You have to build an environment that encourages employees to have and share ideas, experiment with prototypes, and collaborate together. Innersource optimizes an organization for this and the result is a permissive environment that facilitates greater innovation.
  • Easier hiring – young engineers are growing up in a world where they can cut their teeth on open source projects to build up their experience. Consequently, they don’t want to work in dusty siloed organizations, they want to work in an open source way. Innersource (as well as wider open source participation) not only makes your company more attractive, but it is increasingly a requirement to attract the best talent.
  • Improved skills development – with such a focus on collaboration with innersource, staff learn from each other, discover new approaches, and rectify bad habits due to peer review.
  • Easier performance/audit/root cause analysis – innersource workflow results in a digital record of your entire collaborative work. This can make tracking performance, audits, and root cause analysis easier. Your organization benefits from a record of how previous work was done which can inform and illustrate future decisions.
  • More efficient on-boarding for new staff – when new team members join the company, this record of work I outlined in the previous bullet helps them to see and learn from how previous decisions were made and how previous work was executed. This makes on-boarding, skills development, and learning the culture and personalities of an organization much easier.
  • Easier collaboration with the public open source world – while today you may have no public open source contributions to make, if in the future you decide to either contribute to or build a public open source project, innersource will already instill the necessary workflow, process, and skills to work with public open source projects well.

What are the RISKS of innersource for an organization?

While innersource has many benefits, it is not a silver bullet. As I mentioned earlier, innersource is fundamentally about building culture, and a workflow and methodology that provides practical execution and delivery.

Building culture is hard. Here are some of the risks attached:

  • It takes time – putting innersource in place takes time. I always recommend that organizations start small and iterate. As such, various people in the organization (e.g. execs and key stakeholders) will need to ensure they have realistic expectations about the delivery of this work.
  • It can cause uncertainty – bringing in any new workflow and culture can cause people to feel anxious. It is always important to involve people in the formation and iteration of innersource, communicate extensively, reassure, and always be receptive to feedback.
  • Purely top-down directives are often not taken seriously – innersource requires both a top-down permissive component from senior staff and bottom-up tangible projects and workflow for people to engage with. If one or the other is missing, there is a risk of failure.
  • It varies from organization to organization – while the principles of innersource are often somewhat consistent, every organization’s starting point is different. As such, delivering this work will require a lot of nuance for the specifics of that organization, and you can’t merely replicate what others have done.

How do I use Innersource at my company?

In the interests of keeping this post concise, I am not going to explain how to build out an innersource program here, but instead share links to some other articles I have written on how to get started:

One thing I would definitely recommend is hiring someone to help you with this work. While not critical, there is a lot of nuance attached to building the right mix of workflow, incentives, messaging, and building institutional knowledge. Obviously, this is something I provide as a consultant (more details), so if you want to discuss this further, just drop me a line.

Don’t Use Bots to Engage With People on Social Media

I am going to be honest with you, I am writing this post out of one part frustration and one part guidance to people who I think may be inadvertently making a mistake. I wanted to write this up as a blog post so I can send it to people when I see this happening.

It goes like this: when I follow someone on Twitter, I often get an automated Direct Message.

These messages are invariably trying to (a) get me to look at a product they have created, (b) get me to go to their website, or (c) get me to follow them somewhere else, such as LinkedIn.

Unfortunately, there are two similar approaches which I think are also problematic.

Firstly, some people will have an automated tweet go out (publicly) that “thanks” me for following them (as best an automated bot that doesn’t know me can).

Secondly, some people will even go so far as to record a little video that personally welcomes me to their Twitter account. This is usually less than a minute long and again is published as an integrated video in a public tweet.

Why you shouldn’t do this

There are a few reasons why you might want to reconsider this:

Firstly, automated Direct Messages come across as spammy. Sure, I chose to follow you, but if my first interaction with you is advertising, it doesn’t leave a great taste in my mouth. If you are going to DM me, send me a personal message from you, not a bot (or not at all). Definitely don’t try to make that bot seem like a human: much like someone trying to suppress a yawn, we can all see it, and it looks weird.

Pictured: Not hiding a yawn.

Secondly, don’t send out the automated thank-you tweets to your public Twitter feed. This is just noise that everyone other than the people you tagged won’t care about. If you generate too much noise, people will stop following you.

Thirdly, in terms of the personal video messages (and in a similar way to the automated public thank-you messages), in addition to the noise it all seems a little…well, desperate. People can sniff desperation a mile off: if someone follows you, be confident in your value to them. Wow them with great content and interesting ideas, not fabricated personal thank-you messages delivered by a bot.

What underlies all of this is that most people want authentic human engagement. While it is perfectly fine to pre-schedule content for publication (e.g. lots of people use Buffer to have a regular drip-feed of content), automating human engagement just doesn’t hit the mark with authenticity. There is an uncanny valley that people can almost always sniff out when you try to make an automated message seem like a personal interaction.

Of course, many of the folks who do these things are perfectly well intentioned and are just trying to optimize their social media presence. Instead of doing the above things, see my 10 recommendations for social media as a starting point, and explore some other ways to engage your audience well and build growth.
