
I have recently been on the lookout for new employment opportunities, and I have noticed that most applications (as opposed to job descriptions from a few years ago) require you to be what they regard as a “full-stack” developer. It is assumed that “full-stack” carries a well-known definition, accepted by all employers, against which one can easily judge whether one meets the criteria. However, looking closer at current trends in the development and operations space (not DevOps), it seems the definition should perhaps be expanded.

I’m sometimes baffled by friends and colleagues who claim to be “full-stack” developers despite having as little as a few months of real programming experience, or whose abilities are founded entirely on online programming courses. It raises the question of whether they believe software development is 100% programming and that knowing Ruby on Rails and React makes them a “full-stack” developer.

Looking more broadly at the software development scene, containerisation – and Docker in particular – is hard to ignore these days. Coupled with container scheduling solutions such as Kubernetes, there is a huge push to hire for positions where the successful applicant can demonstrate skills in these technologies in addition to more traditional software development skills.

I propose a change to the definition of a “full-stack” developer to include the ability to handle operations too. This could mean Puppet, Chef, Capistrano, Kubernetes, networking, or all of the above, but some experience with servers and deployment should, in my honest opinion, be essential to the definition of “full-stack”. Especially now, when the line between development and operations has become a little blurred (infrastructure as code) and when it should be the responsibility of developers to deploy and maintain their code from conception to production.
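To make “infrastructure as code” concrete, here is a minimal sketch of what a developer-owned Kubernetes Deployment manifest might look like, versioned alongside the application it deploys. The app name, image, and port are hypothetical placeholders, not anything from a real project:

```yaml
# Hypothetical example: a deployment manifest a "full-stack" developer
# might maintain next to their application code.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder application name
spec:
  replicas: 3             # run three instances for availability
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0  # placeholder image
          ports:
            - containerPort: 8080                   # assumed app port
```

A developer comfortable writing, reviewing, and deploying a file like this alongside their Rails or React code is closer to what I would call “full-stack”.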

I would be interested to read what others think of this and whether my opinion is well grounded.