Tag Archives: Machine Learning Studio

[Build 2019] Welcome to the world of Machine Learning with ML.NET 1.0


As I mentioned earlier, ML.NET is a free, cross-platform, and open-source machine learning framework for .NET developers.

  • It is also an extensible platform that powers Microsoft services like Windows Hello, Bing Ads, PowerPoint Design Ideas, and more.
  • This session focuses on the release of ML.NET 1.0.
  • If you want to learn the basics about machine learning and how to develop and integrate custom machine learning models into your applications, this demo-rich session is made for you!
  • With ML.NET, you can now reuse your .NET skills and take advantage of capabilities proven at scale, automated ML and tooling, extensibility (TensorFlow, ONNX, Infer.NET), and open-source, cross-platform support; a minimal code sketch follows below.
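
To make this concrete, here is a minimal sketch of what training and consuming an ML.NET 1.0 model can look like in C#. The HouseData and HousePrediction classes, the column names, and the house-data.csv file are made-up placeholders for illustration, not taken from the session:

    using Microsoft.ML;
    using Microsoft.ML.Data;

    public class HouseData
    {
        [LoadColumn(0)] public float Size { get; set; }
        [LoadColumn(1)] public float Price { get; set; }
    }

    public class HousePrediction
    {
        // The regression trainer writes its output to the "Score" column.
        [ColumnName("Score")] public float Price { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var mlContext = new MLContext();

            // Load training data from a (hypothetical) CSV file.
            IDataView data = mlContext.Data.LoadFromTextFile<HouseData>(
                "house-data.csv", hasHeader: true, separatorChar: ',');

            // Combine the input columns into a Features vector and train a regression model.
            var pipeline = mlContext.Transforms
                .Concatenate("Features", nameof(HouseData.Size))
                .Append(mlContext.Regression.Trainers.Sdca(labelColumnName: nameof(HouseData.Price)));

            ITransformer model = pipeline.Fit(data);

            // Score a single new example.
            var engine = mlContext.Model.CreatePredictionEngine<HouseData, HousePrediction>(model);
            var prediction = engine.Predict(new HouseData { Size = 2.5f });
            System.Console.WriteLine($"Predicted price: {prediction.Price}");
        }
    }

The trained model can also be saved to a .zip file with mlContext.Model.Save and loaded back later from any .NET application (console, web, desktop, and so on).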

Useful Links

Hope this helps.

[Build 2019] Machine learning with ML.NET


As I mentioned earlier, ML.NET is a free, cross-platform, and open-source machine learning framework designed to bring the power of machine learning (ML) into .NET applications.

As you are aware, we are in the middle of the Microsoft Build 2019 conference, where we are joined by Cesar De La Torre Llorente, who gives us a great overview of the goals of ML.NET and shares some of the highlights of the 1.0 release.

In this session, you can learn the difference between machine learning and artificial intelligence, what machine learning models are, why ML.NET was created, how machine learning can help, and how to use the ML.NET CLI, the Visual Studio extension, and more.

Related links:

Hope this helps.

Machine Learning Developer: Build, test, deploy, and consume Azure Machine Learning Web Services


Machine Learning web services provide an interface between an application and a Machine Learning workflow scoring model. An external application can use Azure Machine Learning to communicate with a Machine Learning workflow scoring model in real time.

How easy is that?
A call to a Machine Learning web service returns prediction results to an external application. To make a call to a web service, you pass an API key that was created when you deployed the web service. A Machine Learning web service is based on REST, a popular architecture choice for web programming projects.

Types:
Azure Machine Learning has two types of web services:

  • Request-Response Service (RRS): A low latency, highly scalable service that provides an interface to the stateless models created and deployed by using Machine Learning Studio.
  • Batch Execution Service (BES): An asynchronous service that scores a batch of data records.

Here’s a classic example of how an RRS works:
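
As a rough sketch, the request boils down to a single authenticated HTTP POST. The endpoint URL, API key, and input column and values below are placeholders; in practice you would copy the real values (and the exact JSON shape) from the sample code generated for your deployed service:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class RrsClient
    {
        static async Task Main()
        {
            // Placeholder values: copy the real URL and API key from the
            // Consume page after you deploy the web service.
            const string endpoint = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0";
            const string apiKey = "<your-api-key>";

            // Input payload in the shape the scoring model expects (placeholder column and value).
            string requestBody = @"{
              ""Inputs"": {
                ""input1"": {
                  ""ColumnNames"": [""Size""],
                  ""Values"": [[""2.5""]]
                }
              },
              ""GlobalParameters"": {}
            }";

            using (var client = new HttpClient())
            {
                // The API key is passed as a Bearer token on every request.
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", apiKey);

                var content = new StringContent(requestBody, Encoding.UTF8, "application/json");
                HttpResponseMessage response = await client.PostAsync(endpoint, content);

                // The prediction results come back as JSON in the response body.
                string result = await response.Content.ReadAsStringAsync();
                Console.WriteLine(result);
            }
        }
    }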

Consuming REST API & Access the Web Service:
There are several ways to consume the REST API and access the web service. For example, you can write an application in C#, R, or Python by using the sample code that is generated for you when you deploy the web service.

Sample:
The sample code is available on:

  • The Consume page for the web service in the Azure Machine Learning Web Services portal
  • The API Help Page in the web service dashboard in Machine Learning Studio
  • You can also use the sample Microsoft Excel workbook that’s created for you and is available in the web service dashboard in Machine Learning Studio. For more information about Machine Learning Web services, see Deploy a Machine Learning Web service.

Build, test, and deploy solutions:
Azure Machine Learning enables you to build, test, and deploy predictive analytic solutions. From a high-level point-of-view, this is done in three steps:

  • Create a training experiment – Azure Machine Learning Studio is a collaborative visual development environment that you use to train and test a predictive analytics model using training data that you supply.
  • Convert it to a predictive experiment – Once your model has been trained with existing data and you’re ready to use it to score new data, you prepare and streamline your experiment for predictions.
  • Deploy it as a web service – You can deploy your predictive experiment as a new or classic Azure web service. Users can send data to your model and receive your model’s predictions.

Hope this helps.

What is the difference between Azure Machine Learning Studio, Azure Machine Learning Workbench, and the Azure Machine Learning service?


When it was introduced, Azure Machine Learning Workbench was a downloadable preview application. It provided a UI for many of the Azure Machine Learning CLI commands, particularly around submitting experiments for Python-based jobs to a Data Science Virtual Machine (DSVM) or HDInsight (HDI). The Azure Machine Learning CLI is made up of many key functions, such as job submission and the creation of real-time web services. The Workbench installer provided a way to install everything required to participate in the preview.

Azure Machine Learning Studio is an older product that provides a drag-and-drop interface for creating simple machine learning processes. It has limits on the size of the data it can handle (roughly 10 GB of processing). Lessons learned and customer requests around this service have contributed to the design of the new Azure Machine Learning CLI mentioned above. It should also be noted that Azure Machine Learning Workbench has been deprecated since September 2018 and has been replaced by the Azure Machine Learning service (now generally available).

Azure Machine Learning Service

The core functionality is still intact, but some major changes to point out about the architecture are:

  • A simplified Azure resources model
  • New portal UI to manage your experiments and compute targets
  • A new, more comprehensive Python SDK
  • A new expanded Azure CLI extension for machine learning

In short, the Azure Machine Learning service contains many advanced capabilities designed to simplify and accelerate the process of building, training, and deploying machine learning models. Automated machine learning enables data scientists of all skill levels to identify suitable algorithms and hyperparameters faster. Support for popular open-source frameworks such as PyTorch, TensorFlow, and scikit-learn allows data scientists to use the tools of their choice. DevOps capabilities for machine learning further improve productivity by enabling experiment tracking and management of models deployed in the cloud and on the edge. All these capabilities can be accessed from any Python environment running anywhere, including data scientists’ workstations.