Inaugural Industrial Strength Data Science Section meeting: 'We are not unicorns'

On Thursday 16 May, the RSS Data Science Section hosted its inaugural Industrial Strength Data Science event of the year at RSS HQ in London. The event consisted of a panel discussion on a range of topics centred on the current state of data science in industry today, and how external expectations are affecting the success or failure of data science projects and teams.

The experienced panel of data science practitioners included:

  • Adam Davison, head of insight and data science at The Economist (AD)
  • Kate Land, chief data scientist at Havelock London (KL)
  • Simon Raper, founder at Coppelia Machine Learning and Analytics (SR)
  • Magnus Rattray, director of the Data Science Institute at the University of Manchester (MR)
  • Piers Stobbs, chief data officer at MoneySuperMarket (PS)

The event was hosted by Martin Goodson (CEO of Evolution AI and chair of the RSS Data Science Section). There was a lively debate, together with some excellent audience interaction and participation which continued over drinks later in the evening.

Key issues discussed in the meeting:

  • Data science hype is driving unrealistic expectations both from data scientists around what they will be working on, and from businesses about what they will be able to achieve.
  • To mitigate this, data science leaders need to work closely with business stakeholders and sponsors to clearly define the problems to be addressed and the actions to be taken on delivery of data science projects.
  • In addition, they need to recruit for more general skills, including stats and coding, as well as key attributes such as curiosity and pragmatism, and be clear with candidates on the type and variety of work that will be undertaken on a day-to-day basis.
  • Data science leaders need to drive buy-in for efficient data and analytics platforms and drive self-sufficiency within the data teams by leveraging engineering best practice and serverless cloud based services.

Watch the video here: https://youtu.be/4SqbE8SzyO8

Below is a more detailed summary of the key discussion points.

Effects of the hype

The meeting started with some comments around the evolution of data science as a capability, highlighting the positive benefits of bringing together quantitative practitioners from different functional areas of a business to share experiences and approaches. In academia, Magnus Rattray explained how the techniques currently found in data science were historically explored predominantly in maths and computer science departments, but that there has been a move towards where the data is generated, such as physics- and biology-based research. This dispersal left researchers more isolated, and the rise of the cross-functional data science department has helped to reduce that isolation.

The discussion then moved on to questions around the effect of all the data science hype. Firstly, the effects on practitioners: with all the hyperbole in the press, and the breakthroughs released by Google on a regular basis, it is not surprising that many data science practitioners can feel they are 'not the authentic data scientist' (Kate Land) unless they are uncovering new deep learning architectures or working on petabyte-scale problems. Of course, one of the key purposes of these types of discussions is to demystify what actually goes on and highlight the fact that data science can drive incredibly positive impact in a business setting without needing to push the boundaries of research or reinvent the wheel. A key component of the recruitment process has to be explaining the type and variety of work expected from candidates and making sure this is aligned to expectations.

We moved on to discuss the hype effect on business: the fact that CEOs and business leaders are feeling pressured to invest in "AI" without really knowing what it is and how it can help. This can be a 'recipe for disaster', said Piers Stobbs, as teams of data scientists are hired without a clear remit and without the right infrastructure in place. 'You can't do AI without machine learning, you can't do machine learning without analytics, and you can't do analytics without data infrastructure', he added, quoting Hilary Mason. Businesses often jump to the top of the tree without building the foundations (pulling the data together in one place, data engineering). 'A lot of companies think they are ready for data science but are probably not,' Magnus Rattray commented.

Are these fundamental misunderstandings based on the hype contributing to a perceived lack of success? Likely so. One key component is having senior business leaders (chief data scientists or chief data officers) who understand more than the hype and can help educate decision makers to direct efforts towards tractable problems. What is the 'signal to noise of the problem'? asked Kate Land. It should be possible to differentiate between cat and dog images, but the direction of movement of a stock might simply not be predictable from the data.

One final discussion point around hype was the benefits of embracing it. Although there was general consensus that true artificial general intelligence (AGI) is still some way off, there are tangible benefits from a marketing and funding perspective to embracing the term. The Turing Institute successfully headed off other 'AI'-focused entities by incorporating the term, said Magnus Rattray, and it might well be worth data science teams embracing the term despite any misgivings, if only to avoid 'AI teams' springing up in the same organisation.

What does good look like?

An additional consequence of the hype is a recruiting process focused on buzzwords and methods because the recruiting manager doesn't know what they need: 'we want someone who is an expert on Restricted Boltzmann Machines', said Simon Raper by way of example. There was general agreement that from a recruiting perspective, you want people who are more interested in problem solving than algorithm development, although a solid background in probability with strong quantitative fundamentals is important, so that you can understand how different techniques work, what assumptions are made and where the 'gotchas' lie.

Another theme that came out was around the make-up of a good team, whether specifically in data science or more broadly across data in general. The team needs a variety of skills, ranging from business and process understanding, to strong statistical methods, to strong production-standard coding (the classic Venn diagram). Although individuals should be encouraged to gain skills in all areas, it is the team that becomes the unicorn, rather than the individual. The classic 'T-shape' profile works well, with general capabilities across a broad range of areas combined with deeper knowledge in one or two.

Another area of discussion was self-sufficiency: data science/data teams need to be self-sufficient, with dependencies on tech resources minimised. It is critical to gain agreement from the technology function about who is able to do what, and to instil the requisite skills and processes within the team, so that a model doesn't need to be re-written to go into production. The increasing prevalence of serverless services in AWS and GCP makes this self-sufficiency much more realistic and data science teams in general much more productive.

This led into a lively conversation about how to set up good data science projects. A key theme was to focus on the problem and be crystal clear with stakeholders on what the outcome of the project would produce and how it would be used, not on what methods would be utilised. Simon Raper characterised it elegantly as 'solving problems in business with maths'. 'Find something you can practically deliver to someone who can take advantage of it', said Adam Davison. Stakeholder management and delivering early and often, with feedback on iterations, was a recurring theme. The comparison to the software development process was made: business stakeholders are now used to the concept of software being delivered in an agile and iterative way, and we will hopefully see this approach becoming more acceptable and adopted for data science.

We ended with the provocative question, 'should all CEOs become chief data scientists?', which was met with a resounding 'no' from the panel.

Audience Q&A

Discussion topics included:

  • To what extent should data science be responsible for production? There was a general feeling that data science teams should be able to own and manage productionised processes.
  • Role proliferation, such as research data scientist, product data scientist, machine learning engineer, etc. The general feeling was to be wary of overly specialised job titles, although there was a realisation that some specialisation may emerge between automated decision making versus operations research and helping people make better decisions.
  • What is the best mix of skills for data science teams and what about management skills? There was general agreement that it depends on the scale of the organisation and the team; larger teams in larger more bureaucratic organisations could well benefit from data product/programme managers to help manage stakeholders and change. In general, you want people who 'can write production code; who are driven to build stuff - not coding up algorithms', commented Martin Goodson.
  • What is a data scientist and should there be a qualification? There are definitely core required skills but because the field and roles are still evolving it might be premature. However, the RSS DSS is keen to shape the discussion and its next event in July will be focused on this topic. From an education perspective, said Magnus Rattray, 'we do need some kind of guidelines over what the masters courses need to deliver'.
  • Where should ethics sit: should data scientists own ethics or should it be a separate role? There was consensus that the potential for doing bad things with data is high, and that data scientists should strive to have high ethical and moral standards. Depending on the organisation, there may be specialist roles in compliance or risk departments that should be leveraged/included in the discussion.
  • What should be the interaction between data science and behavioural science? There was agreement on a huge overlap between the two.

Do let us know in the comments your thoughts on any of these topics, and also any questions you would like to see discussed in the future.
