Why the Supercomputer Sector May Bifurcate – Again
- So, if we decide here and now that system performance is the sum of the performance of the system’s parts, maybe we can come up with a rule based on what we understand about accelerators, what we understand about algorithms, the combined network performance and latency minimization, and the optimizations that we’re running in the background. With that rule, we could predict for the next year a Gordon Moore Commemorative Performance Factor of “1.5” or “1.3”: some number that can give you a forecast.
- Dongarra: We get a lot less performance because of a number of things, but mainly because the way the hardware interacts with the software is complicated.
- The Japanese researchers at the RIKEN Center for Computational Science said, “We’re going to build this machine, it’s going to be for scientific computing, and we’re going to try to address some of the weaknesses in the hardware and make them better.” They invested half a billion dollars [in Fugaku].
Qualcomm’s new Snapdragon 888 promises faster speeds, better cameras, and more powerful AI
- The new Spectra 580 is the first ISP from Qualcomm to feature a triple-ISP design, allowing it to do things like capture three simultaneous 4K HDR video streams or three 28-megapixel photos at once, processing up to 2.7 gigapixels per second (35 percent faster than last year).
- The Snapdragon 888 features Qualcomm’s sixth-generation AI Engine, which it promises will help improve everything from computational photography to gaming to voice assistant performance.
- The Snapdragon 888 also features the second-generation Qualcomm Sensing Hub, a dedicated low-power AI processor for smaller hardware-based tasks, like identifying when you raise your phone to light up the display.
- The new second-gen Sensing Hub is dramatically improved, which means the phone will be able to rely less on the main Hexagon processor for those tasks.
Amazon S3 | Strong Consistency | Amazon Web Services
- Amazon S3 delivers strong read-after-write consistency automatically for all applications, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost.
- Strong read-after-write consistency and strong consistency for list operations are automatic, and you no longer need to use workarounds or make changes to your applications.
- S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with any changes reflected.
- Amazon S3 supports parallel requests, which means you can scale your S3 performance in proportion to the size of your compute cluster, without making any customizations to your application.
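To make the guarantee concrete: read-after-write consistency means that once a PUT returns, any subsequent GET or LIST reflects that write. The following is a minimal in-memory model of those semantics, not AWS code; the bucket contents and key names are invented for illustration.

```python
# Minimal in-memory model of S3's strong read-after-write consistency:
# once put_object returns, get_object and list_objects immediately
# reflect the write. Key names are illustrative.

class ConsistentStore:
    def __init__(self):
        self._objects = {}

    def put_object(self, key, body):
        # The write is visible to all readers as soon as this returns.
        self._objects[key] = body

    def get_object(self, key):
        return self._objects[key]

    def list_objects(self):
        return sorted(self._objects)

store = ConsistentStore()
store.put_object("reports/2020-12.csv", b"total,42")

# No workaround or retry loop needed: the read and the listing
# immediately see the new object.
assert store.get_object("reports/2020-12.csv") == b"total,42"
assert "reports/2020-12.csv" in store.list_objects()
```

Under the old eventual-consistency model, the final two reads could have returned stale results; the announcement above is that this pattern is now always safe.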
Applying the MLOps Lifecycle
- There are different deployment patterns for MLOps but let’s assume we have a real-time serving use case.
- When the data is not well-structured or predictable then the MLOps lifecycle can look very different to mainstream DevOps. Then we see a range of approaches come in that are specific to MLOps. Let’s go through each of the phases in turn and the approaches that come into play.
- When we’ve selected a new model then we need to work out how to get it running.
- Deployment tools such as Seldon therefore not only support deployment-phase features but also have integrations for the MLOps needs of the monitoring phase.
- For platform-level tools to offer insights about data going through models, it is important to know what type of data is in play.
- We’re also better placed to ask the right questions in order to scope and approach MLOps projects.
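As a generic illustration of the monitoring phase described above (not Seldon's actual API), one common check is data drift: comparing live feature values against training-time statistics. The sketch below assumes numeric tabular features and uses a simple standardized mean shift; the thresholds and values are invented.

```python
import statistics

def drift_score(train_values, live_values):
    """Standardized shift in the mean of a numeric feature:
    |mean(live) - mean(train)| in units of the training std dev."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

# Invented example: a feature observed during training vs. in production.
train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
stable = [10.1, 9.9, 10.3]     # looks like training data
shifted = [14.0, 15.2, 14.7]   # distribution has moved

assert drift_score(train, stable) < 1.0
assert drift_score(train, shifted) > 3.0  # flag for investigation
```

Real monitoring tools use richer tests (e.g., per-feature distribution distances, outlier detectors), but they need to know the type of data in play, which is the point made above.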
BlackBerry shares rocket upwards on AWS deal to integrate sensor data in vehicles
- BlackBerry shares shot up in early trading on news that the company will partner with Amazon Web Services to jointly develop and market its vehicle data integration and monitoring platform, IVY.
- The former undisputed heavyweight of the smartphone market, BlackBerry has transformed itself into a provider of business security and information integration services and it’s through this transformation that the company attracted the attention of Amazon’s web services business.
- The newest iteration of connected car services from the Waterloo, Canada-based company allows automakers to read vehicle sensor data coming off of equipment from multiple vendors, normalize that data and provide insights around the data for use either remotely or in vehicles.
- The BlackBerry toolkit can also make it easier for automakers to collaborate with a wider pool of developers to create new services around vehicle performance optimization, maintenance cost reduction, and remote software updates.
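IVY's internals are not public, so purely as a hypothetical sketch of what "normalizing" multi-vendor sensor data could mean, here is a mapping of vendor-specific payloads onto a single common schema. The vendor names, field names, and units are all invented for illustration.

```python
# Hypothetical normalization of vehicle sensor readings from two
# vendors into one common schema; both vendor formats are invented.

def normalize(vendor, payload):
    if vendor == "vendor_a":  # reports speed in km/h, battery as percent
        return {"speed_kmh": payload["spd"],
                "battery_pct": payload["batt"]}
    if vendor == "vendor_b":  # reports speed in mph, battery as a fraction
        return {"speed_kmh": payload["speed_mph"] * 1.609344,
                "battery_pct": payload["battery"] * 100}
    raise ValueError(f"unknown vendor: {vendor}")

a = normalize("vendor_a", {"spd": 80.0, "batt": 64.0})
b = normalize("vendor_b", {"speed_mph": 50.0, "battery": 0.64})

# Downstream services see one schema regardless of the sensor vendor.
assert a["battery_pct"] == b["battery_pct"] == 64.0
```

The value of a layer like this is that applications and analytics are written once against the common schema rather than per sensor vendor.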
Why You Should Become a Data Scientist in a Tier One Consulting Firm
- At this point, you may consider multiple options such as being a data scientist in a corporate company, startup, or consulting firm.
- Data scientists are in high demand and it is worth exploring your options to maximize the probability of making the right choice.
- After a coffee chat at the office of my current workplace, I was sure that working as a data scientist in a Tier One consultancy was the right fit for me.
- The discussion of skills required to become a great data scientist has long been centered around hard skills like programming and deployment skills.
- Let’s not forget about the soft skills needed to succeed as a data scientist.
- Has this changed your view on becoming a data scientist in a Tier One consulting firm?
Why is Apple's M1 chip so fast?
- The benefit of this is that specialized chips tend to be able to perform their tasks significantly faster, while drawing much less power, than a general-purpose CPU core.
- For many years, specialized chips such as graphics processing units (GPUs) have been sitting in Nvidia and AMD graphics cards, performing graphics-related operations much faster than general-purpose CPUs. This is part of the reason why a lot of people working on image and video editing with the M1 Macs are seeing such speed improvements.
- AMD has also started putting stronger GPUs on some of its chips and moving gradually towards some form of SoC with its accelerated processing units (APUs), which are basically CPU cores and GPU cores placed on the same silicon die.
Data Anonymization with Autoencoders
- The latent representation of the data extracted by this method can be used in downstream machine learning predictive tasks while preserving the secrecy of the original data and without a significant performance drop.
- When we train the neural network, the difference between the input and the output is computed to backpropagate the loss and update the weights, while during the predictive phase, we use only the weights of the encoder part, as we need only the latent representation.
- In this tutorial, we have seen how to apply an autoencoder to anonymize a dataset in order to pass the encoded data to downstream machine learning tasks.
- A well-trained autoencoder preserves the predictive power of the original data; however, once the features are encoded, it is not possible to perform exploratory data analysis (for instance, joining two datasets).
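The scheme above can be sketched end to end. This is an illustrative minimal example, not the tutorial's code: a tiny linear autoencoder trained by gradient descent on reconstruction loss, after which only the encoder output is kept as the "anonymized" data. The data, dimensions, and learning rate are invented.

```python
import numpy as np

# Illustrative linear autoencoder: train encoder/decoder weights on
# reconstruction loss, then share only the latent representation Z
# (not X) with downstream tasks.

rng = np.random.default_rng(0)
n, d, k = 200, 6, 2                       # samples, features, latent size
latent_true = rng.normal(size=(n, k))
X = latent_true @ rng.normal(size=(k, d)) # synthetic low-rank data

W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))

def loss(X, W_enc, W_dec):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

first = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(500):
    Z = X @ W_enc
    err = (Z @ W_dec - X) / n             # backpropagate reconstruction error
    W_dec -= lr * Z.T @ err
    W_enc -= lr * X.T @ (err @ W_dec.T)

final = loss(X, W_enc, W_dec)
assert final < first                      # reconstruction improved

# Predictive phase: only the encoder is used; Z hides the raw features
# but keeps structure that downstream models can learn from.
Z_anonymized = X @ W_enc
```

Note that `Z_anonymized` has no column-level correspondence to the original features, which is exactly why the join-style exploratory analysis mentioned above is no longer possible.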