Over the last few years I have had the chance to speak with many data scientists and machine learning researchers at other companies, and one motif kept repeating: data scientists are expected to develop algorithms that will be deployed to customers in production within a few months. To achieve this goal, many of them focus on reading as many academic deep learning papers as possible and implementing the algorithms they describe. I asked these experts whether they also considered other kinds of tasks, for example improving the data annotation GUI (graphical user interface) so that it becomes much easier for the annotators to use and they can provide 10x more data in the same amount of time. I asked because tasks like these had worked very well for our team at certain times. Most of the experts agreed that this is indeed possible and, at many points in time, far more promising within the given timeline than reading and implementing more papers, which is time consuming and improves performance in only a small fraction of cases.
Next I asked the experts why they chose reading and implementing papers over the other, more promising alternatives (such as improving the GUI to accelerate annotation 10x). Their answer was, in essence: What do you mean? We're researchers, not software engineers. It's not our job.
Many of the experts went on to tell me: 'I completely understand there's no one else developing that GUI, and that my role definition and skill set are the closest to being able to do it in the relevant timeframe. I really want the product to succeed, but I just didn't come here to do this; I came here to do research. I'll do everything possible to make the product succeed, as long as it's research.' To create a successful product in A.I. you have to do research, and you have to do science. But for 25–75% of the time you also have to do many other things, without which you simply can't ship a product that users find easy to use and love, that has proven its value, that passes regulatory approval, and so on.
True ownership is feeling end-to-end responsibility for a task. It is never saying 'I came here to do X, so my responsibility starts here and ends here.' Ownership is doing whatever it takes for the task to succeed: if there's someone with a better fit for a specific sub-task who is also available, they should do it, but otherwise, I'll do it happily. Ownership is a core value in our team, and we specifically look for it when we recruit. We look for people who can't see themselves thriving without a full sense of ownership, people who wake up smiling in the morning because they're going to do whatever it takes to win, and it's really thanks to them that we will win. We'll reject the brightest and most experienced deep learning experts on this value. And before we do that, we'll make every effort to put our values and expectations on the table as transparently as possible. I see it as a success if a candidate, as brilliant as she may be, rejects us because she didn't connect with our values.
Over time we saw the 25% of non-research work grow to 80%, at which point we all agreed things were unbalanced and decided to create a new role: A.I. Software Engineers. These are expert software engineers whose main responsibility is the software engineering of the company's A.I. components (both in research and in production), including their scalability. These engineers are literally writing the book of design patterns for A.I. software engineering, but that's a topic for a different post.
In the first years of our company, we saw beyond any doubt that from a business perspective the software engineering of the A.I. components was at least as important as the algorithmic work. Once the algorithms pass a certain threshold of maturity, customers love your product and you want to focus on scaling things up. Since we want to excel at both simultaneously, we understood we needed to bring in people who see either engineering or research as the main goal of their career. Even the smartest deep learning expert has a limit to how much she can learn; even with the best intentions it's very hard to strive to excel at both deep learning algorithms and software engineering. The second reason we decided to hire A.I. software engineers is that we were clearly heading towards 99% software engineering work, and it was equally clear that people who joined as algorithm engineers won't stay happy over time if they don't do enough deep learning and research. So there is a balance; we don't want to take advantage of our people's ownership, and I don't see that as a lack of ownership.
Looking back at the first years of our company, these things really depend on the period in the company's life. Though it is our role as managers to plan for the future, it's very hard to accurately predict future personnel needs. Even now that we hire A.I. software engineers, there will be weeks when the algorithm engineers are required to do 100% software engineering, and there will be months when the A.I. software engineers are required to work on the algorithms. Just recently, one of the algorithm engineers on our team spent a month developing a big-data pipeline that enabled us to perform medical research on hundreds of thousands of CT scans from certain hospitals. This research enabled us to bring very impressive clinical evidence of the medical value our product delivers. That job had nothing to do with research into new deep learning algorithms, but its products were so convincing that it had everything to do with surpassing our 2018 sales goals by a huge margin.
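The post doesn't describe how that pipeline was built, but purely as an illustration of the kind of "non-research" engineering work I mean, here is a minimal sketch of the skeleton of such a batch job. Everything in it is a hypothetical assumption: the directory layout, the `summarize_scan` step, and the CSV output are stand-ins for the real anonymization and measurement logic.

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path
import csv

def summarize_scan(scan_dir: Path) -> dict:
    # Hypothetical per-scan step: a real pipeline would parse the DICOM
    # series, anonymize it, and compute the measurements the study needs.
    # Here we only count slice files as a placeholder.
    slice_count = sum(1 for _ in scan_dir.glob("*.dcm"))
    return {"scan_id": scan_dir.name, "num_slices": slice_count}

def run_pipeline(root: Path, out_csv: Path, workers: int = 8) -> None:
    # Each scan lives in its own sub-directory (an assumed layout).
    scan_dirs = [d for d in root.iterdir() if d.is_dir()]
    # Fan the per-scan work out across processes, then collect the rows.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        rows = list(pool.map(summarize_scan, scan_dirs))
    with out_csv.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["scan_id", "num_slices"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    run_pipeline(Path("/data/ct_scans"), Path("scan_summary.csv"))
```

The point is not the code itself but the shape of the work: plumbing, parallelism and bookkeeping over hundreds of thousands of scans, none of it novel deep learning, all of it essential to producing the clinical evidence.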
And for all the people who think the success of an A.I. startup is a Cinderella story of a genius PhD who comes up with an algorithm 10x better than everyone else's: that is only a very small part of the success. Even with the best algorithms, you will rarely succeed, rarely deliver, rarely sell, and you won't scale up fast enough if you don't find a million small and ingenious tricks that make you move faster and smarter than everyone else. My personal belief is that the companies that do whatever it takes to succeed will also survive long enough to build algorithms that are not 10 but 100 times better. It's a long-term game.
To summarize, I know and hope there are already many teams like ours out there. But I also know that many aren't. There are probably cases in which it's better not to be like us, but if you feel that's not the case for you, I hope you connected with this post. If you're a team leader, my advice is to try to nurture this type of culture and to look for the kind of people who do whatever it takes for the task to succeed; people who say about non-algorithmic tasks: "If there's someone better fit for a specific task who is also available to do it, they should probably do it, but otherwise, I'll do it happily." Don't let the loud minority trying to set the tone for the entire A.I. community fool you. It's more than possible to find people who will be the best A.I. researchers and prefer to work in this type of culture; people who will also be very passionate about software engineering, about writing and running tests, and about optimizing processes for regulatory approvals.
As always I would love to hear your thoughts about this in the comments, whether you agree or disagree.
Idan Bassuk is the VP of A.I. at Aidoc. This post was originally posted on Medium.com.