The promise of the supercomputing industry has always been that pioneering, bleeding-edge technology built for the hyperscale filters down to the mainstream, so we all have exciting stories about how the phone in your pocket is as powerful as the world’s fastest machine from not-so-many years ago. Whilst this is (was?) true for hardware, very few scientists will tell you that software ease of use or accessibility has moved in such leaps and bounds. Moreover, whilst collaboration with others around the globe brings many benefits, other factors such as data security and privacy can frequently, and very suddenly, demand a great deal of attention. If we’re to keep our pace of scientific discovery going (or even accelerate it), the most pressing task in the computing community is to give working scientists access to the tools and techniques they need in a form that is easy to use, secure for their data, and scalable to the limits of their ideas rather than to the limits of their laptop’s memory. This means an inherently different approach from what has come before, not just business as usual with more bandwidth.
🎥 This talk was recorded on video and is available at https://doi.org/10.5446/42484.