Course Description: The lab sessions cover cloud application development and deployment, use of cloud storage, creation and configuration of virtual machines, and data analysis on the cloud using data mining tools.
Scope and Objective:
This course is designed to enable students to:
Become familiar with application development and deployment using OpenStack cloud platforms
Be able to deploy an open-source software framework such as Hadoop to tackle in-demand Big Data workloads with a single click on the catalog
Be able to work with and understand containers by spinning up a Docker-equipped virtual machine
Be able to use one platform for all kinds of current tasks and use cases, such as Java, Python, web development, etc.
Upon successful completion of this course, the student will be able to:
CO1. Demonstrate the types of cloud computing architectures
CO2. Create and run virtual machines on an open-source OS
CO3. Implement Infrastructure as a Service using OpenStack
CO4. Implement Storage as a Service using OpenStack
Note: Students are advised to have the code ready before deployment.
Understand the basics of cloud computing and learn the main cloud computing architecture models: IaaS, PaaS, and SaaS
Learn how to make full use of programming languages such as Java, Python, PHP, etc. to build and deploy applications on any platform, engineer websites, and so on.
Learn how to create and use familiar database engines such as PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server
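The create/insert/query pattern is the same across these engines when driven from Python's DB-API 2.0 interface. Below is a minimal sketch using the standard-library sqlite3 module as a stand-in; the table name and rows are invented for illustration, and swapping the connect() call for a PostgreSQL or MySQL driver's connect() gives the same flow against those engines.

```python
import sqlite3

# In-memory database for illustration; a real lab exercise would connect
# to one of the engines listed above via its own DB-API driver.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a table, insert rows with parameterized SQL, then query them back.
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO students (name) VALUES (?)",
                [("Asha",), ("Ravi",)])
conn.commit()

cur.execute("SELECT name FROM students ORDER BY id")
names = [row[0] for row in cur.fetchall()]
print(names)  # ['Asha', 'Ravi']
conn.close()
```

Parameterized queries (the `?` placeholders) are the idiomatic way to pass values in every DB-API driver, and they avoid SQL injection.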
Deploy in-demand machine learning and artificial intelligence models without taking the conventional route: classify images, predict stock market prices, detect credit card fraud, analyze sentiment through sentiment analysis, and more.
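To make the sentiment analysis task concrete, here is a toy lexicon-based scorer. It is a minimal sketch of the idea only, not the deployed model; the word lists are invented for illustration.

```python
# Invented mini-lexicons; a real model would learn these weights from data.
POSITIVE = {"good", "great", "excellent", "love", "fast"}
NEGATIVE = {"bad", "slow", "terrible", "hate", "broken"}

def sentiment(text: str) -> str:
    """Classify text by counting positive vs. negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The deployment was fast and the docs are great"))  # positive
print(sentiment("The build is broken and terrible"))                # negative
```

A production pipeline would replace the hand-written lexicons with a trained classifier, but the input-text-in, label-out interface stays the same.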
Transition into being a Linux user by learning how to configure servers of your choice, and learn the kinds of workloads for which each server will mainly be used.
Learn how to deploy multi-node clusters such as Hadoop and Kubernetes.
For Kubernetes, perform tasks such as configuring all Kubernetes hosts, adding the package repository, installing the required packages, setting up hostname resolution, and continuing with the multi-node configuration.
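The hostname-resolution step can be sketched as follows: every node needs /etc/hosts entries (or DNS records) so the cluster hosts can reach each other by name. The IP addresses and hostnames below are hypothetical lab values, not part of any real cluster.

```python
# Hypothetical lab topology: one control-plane node and two workers.
NODES = {
    "10.0.0.10": "k8s-master",
    "10.0.0.11": "k8s-worker1",
    "10.0.0.12": "k8s-worker2",
}

def hosts_entries(nodes: dict) -> str:
    """Render one '/etc/hosts'-style line (IP, tab, hostname) per node."""
    return "\n".join(f"{ip}\t{name}" for ip, name in nodes.items())

print(hosts_entries(NODES))
```

In the lab, the generated lines would be appended to /etc/hosts on every node before the cluster is initialized, so that node names resolve consistently across the cluster.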
For Hadoop, deploy multi-node clusters to tackle huge amounts of data. As big data grows exponentially, the parallel processing capability of a Hadoop cluster speeds up analysis. However, the processing power of a cluster can become inadequate as data volume increases. In such scenarios, a Hadoop cluster can scale out easily, keeping up with the pace of analysis by adding extra cluster nodes without modifying the application logic.
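The scale-out argument above can be put in back-of-the-envelope terms: for embarrassingly parallel map tasks, total processing time falls roughly in proportion to the number of worker nodes. The throughput figures below are illustrative assumptions, not benchmarks.

```python
def processing_hours(data_tb: float, tb_per_node_hour: float, nodes: int) -> float:
    """Idealised linear scale-out: total work divided evenly across nodes."""
    return data_tb / (tb_per_node_hour * nodes)

# Assume 100 TB of data and each node processing 0.5 TB per hour:
print(processing_hours(100, 0.5, 4))   # 50.0 hours on 4 nodes
print(processing_hours(100, 0.5, 10))  # 20.0 hours after adding 6 more nodes
```

Real clusters fall short of this ideal (shuffle traffic, stragglers, replication overhead), but the model shows why adding nodes, rather than rewriting application logic, is the usual response to growing data volume.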
Learn the basics of virtualization and the installation of hypervisors such as KVM and VMware, and learn how to create and manage virtual machines. The tasks we can achieve through this are:
Create virtual switches that connect virtual machines
Offer identity, authentication, and role-based access to virtual machines for users
Offer policy-based administration by putting virtual machines in logical groups and applying relevant policies.
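The policy-based administration task above can be sketched as a small data model: virtual machines are placed in logical groups, and each group carries a policy that its members inherit. The group names, VM names, and policy fields here are hypothetical.

```python
# Hypothetical logical groups: each maps member VMs to a shared policy.
GROUPS = {
    "web-tier": {"vms": ["web-1", "web-2"], "policy": {"max_vcpus": 2}},
    "db-tier":  {"vms": ["db-1"],           "policy": {"max_vcpus": 8}},
}

def policy_for(vm: str) -> dict:
    """Look up the policy a VM inherits from its logical group."""
    for group in GROUPS.values():
        if vm in group["vms"]:
            return group["policy"]
    raise KeyError(f"{vm} is not in any group")

print(policy_for("db-1"))  # {'max_vcpus': 8}
```

The point of the grouping is that an administrator edits one policy and every VM in the group picks up the change, rather than configuring each VM individually.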
Learn how to install and configure Identity Management, which is used as the authentication/authorization layer for OpenStack. The tasks we can achieve through this are:
Mapping of policy targets to APIs
Setting up Keystone
Creating users and roles, and managing services
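The Keystone tasks above can be sketched as a toy role-based access check: a policy target names an API action and the roles allowed to perform it, and a request is permitted only if the caller holds one of those roles. Keystone enforces this through its own policy files; the dictionaries and role names below are invented stand-ins, not the real policy syntax.

```python
# Invented policy targets mapping API actions to the roles that may call them.
POLICY = {
    "compute:create_server": {"member", "admin"},
    "identity:create_user":  {"admin"},
}

# Invented user-to-roles assignments.
USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"member"},
}

def is_allowed(user: str, target: str) -> bool:
    """True if any of the user's roles satisfies the policy target."""
    return bool(USER_ROLES.get(user, set()) & POLICY.get(target, set()))

print(is_allowed("bob", "compute:create_server"))  # True
print(is_allowed("bob", "identity:create_user"))   # False
```

In the lab, the equivalent administrative steps are performed with the OpenStack client (creating users, creating roles, and assigning roles on projects) after Keystone is set up.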
Understand the concept of Storage as a Service and learn how it is implemented using OpenStack, which gives universal access to files through a web interface.
Here we create different virtual machines on Packstack (IaaS), enable the Swift storage service, and attach Swift storage to the created virtual machines.
Our goal is scalable, unified storage for the deployed applications, able to work with the desired databases with a single click on the catalog.