Testing new research algorithms and evaluating massive data sets efficiently is a fast-evolving field. The sheer volume of data that must be moved in and out of a research environment, and the careful handling of results for later comparison, can slow a research team or department to a crawl.
Yet applying the best practices of shared workflows, paired with the agility and programmability of containers running on fast NVMe storage, unlocks enormous performance gains, letting teams get research done far faster and more thoroughly than ever before.
Watch this on-demand, fast-moving discussion of the challenges data scientists face today – lessons learned and best practices from delivering a high-performance workflow for a US Government scientific and research customer, including:
- How to ‘Dockerize’ discrete solution algorithms in containers for rapid testing
- How to pair the shared storage performance of StorNext 7 with NVMe to build the fastest possible workflow for large image and data sets
- How to archive data and result sets offsite to secure, encrypted cloud storage with StorNext FlexTier
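
The ‘Dockerize’ approach mentioned above can be sketched as a minimal container image. This is only an illustration of the general pattern, not material from the session – the base image, script name (`algorithm.py`), and mount paths are all hypothetical assumptions:

```dockerfile
# Hypothetical sketch: package one discrete research algorithm for rapid, repeatable testing.
# The script and dependency file names are assumptions for illustration only.
FROM python:3.11-slim

WORKDIR /app

# Install only what this one algorithm needs, keeping images small and rebuilds fast
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the algorithm under test into the image
COPY algorithm.py .

# Mount the shared NVMe/StorNext volume at runtime rather than baking data in, e.g.:
#   docker run -v /stornext/datasets:/data algo-test /data/input
ENTRYPOINT ["python", "algorithm.py"]
```

Keeping each algorithm in its own small image like this lets variants be built, swapped, and compared quickly while the large data sets stay on shared storage.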
The session ends with a slide listing the top takeaways and workflow-building recommendations, along with a list of resources mentioned during the discussion.