VMware Administrator/VMware Interview Questions 2020

  1. What is cloud networking?

Cloud networking is a type of infrastructure where network capabilities and resources are available on demand through a third-party service provider that hosts them on a cloud platform. The network resources can include virtual routers, firewalls, and bandwidth and network management software, with other tools and functions becoming available as required. Companies can either use cloud networking resources to manage an in-house network or use the resources completely in the cloud.
 

  2. What is Application Virtualization?

Application virtualization is a process that deceives a standard app into believing that it interfaces directly with an operating system’s capabilities when, in fact, it does not.
 
This ruse requires a virtualization layer inserted between the app and the OS. This layer, or framework, must run an app’s subsets virtually and without impacting the subjacent OS. The virtualization layer replaces a portion of the runtime environment typically supplied by the OS, transparently diverting files and registry log changes to a single executable file.
 
By diverting the app’s processes into one file instead of many dispersed across the OS, the app easily operates on a different device, and formerly incompatible apps can now run adjacently. 
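The redirection idea described above can be sketched in a few lines of Python. This is a toy illustration of the concept, not any vendor's actual virtualization layer: writes aimed at a shared "system" location are transparently diverted into a per-app sandbox directory, so the underlying OS files are never touched. All class and directory names here are invented for the example.

```python
# Toy sketch of the file-redirection idea behind application
# virtualization: writes aimed at a "system" location are diverted
# into a per-app sandbox directory, leaving the real location untouched.
import os
import tempfile

class RedirectingLayer:
    def __init__(self, system_dir, sandbox_dir):
        self.system_dir = os.path.abspath(system_dir)
        self.sandbox_dir = os.path.abspath(sandbox_dir)

    def resolve(self, path):
        """Map a path under system_dir into the sandbox; leave others alone."""
        path = os.path.abspath(path)
        if path.startswith(self.system_dir):
            relative = os.path.relpath(path, self.system_dir)
            return os.path.join(self.sandbox_dir, relative)
        return path

    def open(self, path, mode="r"):
        target = self.resolve(path)
        if "w" in mode or "a" in mode:
            os.makedirs(os.path.dirname(target), exist_ok=True)
        return open(target, mode)

# The app "thinks" it writes to the system directory, but the file
# actually lands in the sandbox; the system directory stays empty.
system_dir = tempfile.mkdtemp()
sandbox_dir = tempfile.mkdtemp()
layer = RedirectingLayer(system_dir, sandbox_dir)
with layer.open(os.path.join(system_dir, "config", "app.ini"), "w") as f:
    f.write("setting=1")
print(os.listdir(system_dir))  # []
```

A real virtualization layer intercepts file and registry calls at the OS API level rather than through a wrapper object, but the mapping logic is the same in spirit.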
 
Used in conjunction with application virtualization is desktop virtualization—the abstraction of the physical desktop environment and its related app software from the end-user device that accesses it.
 

  3. What is a Hypervisor?

A hypervisor, also known as a virtual machine monitor, is a process that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing.
 
Generally, there are two types of hypervisors. Type 1 hypervisors, called “bare metal,” run directly on the host’s hardware. Type 2 hypervisors, called “hosted,” run as a software layer on an operating system, like other computer programs.
 
 

  4. What is a bare metal hypervisor?

A hypervisor, also known as a virtual machine monitor or VMM, is a type of virtualization software that supports the creation and management of virtual machines (VMs) by separating a computer’s software from its hardware. Hypervisors translate requests between the physical and virtual resources, making virtualization possible. When a hypervisor is installed directly on the hardware of a physical machine, between the hardware and the operating system (OS), it is called a bare metal hypervisor. Some bare metal hypervisors are embedded into the firmware at the same level as the motherboard basic input/output system (BIOS). This is necessary for some systems to enable the operating system on a computer to access and use virtualization software.
 
Because the bare metal hypervisor separates the OS from the underlying hardware, the software no longer relies on or is limited to specific hardware devices or drivers.  This means bare metal hypervisors allow operating systems and their associated applications to run on a variety of types of hardware. They also allow multiple operating systems and virtual machines (guest machines) to reside on the same physical server (host machine). Because the virtual machines are independent of the physical machine, they can move from machine to machine or platform to platform, shifting workloads and allocating networking, memory, storage, and processing resources across multiple servers according to needs. For example, when an application needs more processing power, it can seamlessly access additional machines through the virtualization software. This results in greater cost and energy efficiency and better performance, using fewer physical machines. 
 

  5. What is the difference between bare metal hypervisors and hosted hypervisors?

The bare metal hypervisor is the most commonly deployed type of hypervisor. This is where the virtualization software is installed directly on the hardware, where the operating system is normally installed. Bare metal hypervisors are extremely secure since they are isolated from the attack-prone operating system. They perform better and more efficiently than hosted hypervisors, and most companies choose bare metal hypervisors for enterprise and data center computing needs. 
 
There is another type of hypervisor, known as a client or hosted hypervisor. While bare metal hypervisors run directly on the computing hardware, hosted hypervisors run within the operating system of the host machine. Although hosted hypervisors run within the OS, additional operating systems can be installed on top of it. Hosted hypervisors have higher latency than bare metal hypervisors because requests between the hardware and the hypervisor must pass through the extra layer of the OS. Hosted hypervisors are also known as client hypervisors because they are most often used with end users and software testing, where the higher latency is not as much of a concern. 
 
Hardware acceleration technology can boost processing speed for both bare metal and hosted hypervisors by doing some of the resource-intensive work of creating and managing virtual resources. A virtual Dedicated Graphics Accelerator (vDGA) is a type of hardware accelerator that can take care of sending and refreshing high-end 3-D graphics, freeing up the main system for other tasks and greatly increasing the speed at which images are displayed. This technology is very useful for industries such as oil and gas exploration, where companies need to quickly visualize complex data.
 

  6. What is cloud automation?

Cloud automation enables IT admins and Cloud admins to automate manual processes and speed up the delivery of infrastructure resources on a self-service basis, according to user or business demand. Cloud automation can also be used in the software development lifecycle for code testing, network diagnostics, data security, software-defined networking (SDN), or version control in DevOps teams.
 
Cloud automation of web servers for SaaS/PaaS applications can be scripted with tools like Puppet, Jenkins, Git, or Travis CI. The ability to script web server deployments using full-stack disk image files is a key aspect of contemporary DevOps best practices using Docker containers with Kubernetes, CoreOS, Mesosphere, or Docker Swarm for elastic orchestration.
 
Cloud automation can also be implemented to support corporate WAN, VLAN, and SD-WAN deployments using software from VMware, Microsoft, or open source Linux developers.
 

  7. Advantages of cloud automation

  1. Reduction of error-prone processes: Cloud automation helps reduce error-prone manual processes and deliver infrastructure resources faster. Cloud automation platforms must support multiple hypervisors and virtualization and container standards, such as KVM, XenServer, Hyper-V, Docker, and Kubernetes, as well as the software development lifecycle for programming teams.
  2. Cost savings: Cloud automation saves businesses money by reducing the time it takes to provision infrastructure resources, eliminating errors, and removing bottlenecks. It also saves money by optimizing workload placement, so that the least expensive hardware is used and hardware is prioritized for important projects, and it increases control through policies. Public cloud billing structures are often presented as delivering roughly 40% savings over in-house data centers or private cloud installations.
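The workload-placement optimization mentioned above can be sketched as a simple policy: pick the cheapest host that still satisfies a workload's resource needs. The host names, prices, and field names below are made up for illustration; real placement engines weigh many more factors (affinity rules, licensing, failure domains).

```python
# Illustrative sketch of cost-optimized workload placement:
# choose the least expensive host that fits the workload.
def place(workload, hosts):
    """Return the cheapest host with enough free CPU and memory, or None."""
    candidates = [
        h for h in hosts
        if h["free_cpu"] >= workload["cpu"] and h["free_mem_gb"] >= workload["mem_gb"]
    ]
    return min(candidates, key=lambda h: h["cost_per_hour"], default=None)

hosts = [
    {"name": "std-a",  "free_cpu": 8,  "free_mem_gb": 32,  "cost_per_hour": 0.40},
    {"name": "hi-mem", "free_cpu": 16, "free_mem_gb": 128, "cost_per_hour": 1.20},
]
chosen = place({"cpu": 4, "mem_gb": 16}, hosts)
print(chosen["name"])  # std-a
```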

Cloud automation is used in DevOps teams for software testing in CI/CD requirements, allowing them to roll out new features and security patches to production more quickly in Agile project management.
 

  8. Types of cloud automation

There are two types of cloud automation. The first is support for corporate data center operations. The second is hosting for websites and mobile applications at scale. Public cloud hardware from AWS, Google Cloud, and Microsoft Azure can be used for either purpose. Code Stream, Cloud Assembly, and Service Broker all plug in to the VMware vCloud platform for DevOps and software development teams.
 
In the first type of Cloud Automation, IT administrators seek to leverage the same benefits of public cloud such as self-service, policies, faster provisioning, and automated operations in the corporate environment, in their on-premises private cloud or in the hybrid cloud. In the second type, Cloud automation improves network traffic speeds through SDN and load balancing utilities, while also serving web and mobile applications to millions of page hits per day.
 
Web and mobile applications are built from a wide variety of microservices. Each app requires a dedicated, isolated run-time environment that can scale elastically with user traffic. AWS EC2 and Kubernetes are the most popular solutions for maintaining elastic web server platforms for microservice-driven software applications. Elastic web server platforms implement database replication and synchronization, load balancing of network traffic requests, and automated anti-virus scanning in production.
 

  9. Cloud automation vs. cloud orchestration

Cloud automation using software like the vRealize Suite from VMware automates the provisioning and ongoing management and operations of private or hybrid clouds, including code testing, web server configuration, version control, and data center administration. Cloud orchestration platforms like Kubernetes, Docker Swarm, Mesosphere, or CoreOS Tectonic all implement elastic web server support at enterprise scale for cloud hosting only.
 
In this rapidly changing market sector, many cloud web server orchestration platforms are being extended to other data center use requirements or specialized applications for the telecommunications industry and large manufacturing firms. Enterprise IT departments need to leverage this innovation for productivity gains and cost-efficiency improvements across data center operations.
 

  10. What is Converged Infrastructure?

Converged infrastructure is a pre-packaged bundle of systems, including servers, storage, networking, and management software. Companies usually purchase these systems from one company, instead of buying the hardware and software components separately from different suppliers. Converged infrastructure systems are typically pre-configured and pre-tested, making them easier and faster to deploy when building out a data center.  
 

  11. Benefits and drawbacks of Converged Infrastructure

A converged infrastructure is typically composed of components from a single vendor. This has the following advantages:

  1. Greater compatibility: Helps minimize or even eliminate hardware and software compatibility issues.
  2. Cost savings: Converged infrastructure saves money on data center provisioning, deployment, and management. Although currently many converged systems still require separate management tools for compute, networking and storage even if they come from the same manufacturer, IT teams are starting to configure and manage all the system resources through a single management interface. 
  3. Simplification: Streamlines data center management because it eliminates the need for IT to have expertise in products from multiple vendors.


Potential downsides to converged infrastructure include:

  1. Vendor reliance: Converged infrastructure locks a company into a single vendor, which can result in fewer features and functionality, and limited customization options. 
  2. Increased complexity: Adding components to a converged architecture after installation can be complicated and expensive.

  12. Converged Infrastructure vs. Hyperconverged Infrastructure

Where converged infrastructure is hardware-based and specializes in packaged hardware and software from a single vendor, hyperconverged infrastructure collapses compute and storage resources into highly virtualized, industry-standard x86 servers, with unified management.
 
Converged infrastructure can take advantage of existing hardware. With a converged architecture, servers, storage, networking, and management remain independent of each other. That way, individual components can be used for specific and separate purposes, plus servers and storage can be scaled independently. 
 
Hyperconverged infrastructure (HCI) involves more abstraction and offers more flexibility than converged infrastructure, since its components are software-defined. An HCI system’s virtualized compute and storage resources also make it easier and faster to scale compared to converged infrastructure. Hyperconverged infrastructure allows you to easily scale up by adding or replacing drives in existing servers, or scale out by adding nodes to a cluster. This usually means that even if you only need additional storage, the new node will also come with compute. Nodes can be added one at a time for incremental growth, whereas storage arrays often require a new controller or shelf of drives, which is a much larger single spend. 
 
Both converged and hyperconverged systems can lower the data center footprint, simplify management, and increase infrastructure efficiency. Companies usually choose one versus the other based on variables like the size of the environment, desired amount of control over the environment, cost, existing infrastructure, and future IT goals and vision.
 

  13. How to Deploy Converged Infrastructure

Converged infrastructure can be deployed and hosted on-premises, with system administration on a single web server. Converged infrastructure simplifies the traditional deployment process by offering pre-configured settings on hardware devices. It sometimes includes firewall appliances with their own proprietary network management software.
 
There are pre-configured converged infrastructure solutions designed for different use cases, including data centers, remote desktops, and web servers. These sector-specific packages typically include routers, cables, and networking equipment to run a corporate WAN at scale.  
 

  14. Converged Infrastructure vs. Public Cloud Solutions

With a converged infrastructure solution, companies buy packaged hardware and software solutions from a vendor or consultancy. In a cloud model, companies do not purchase hardware—instead, they purchase based on a subscription-based or consumption-based pricing model for Infrastructure-as-a-Service (IaaS), which runs on the cloud service provider’s hardware. Businesses can either pay up front (subscription-based), or for what they use (consumption-based), with the potential to scale up or down according to demand. Cloud solutions can be provisioned with software or contracted under a managed approach.
 

  15. What is Converged Architecture?

Converged architecture refers to the components, design, and configuration of a converged infrastructure. The term “converged architecture” is sometimes used interchangeably with “converged infrastructure.”
 

  16. What is a Hybrid Cloud?

Hybrid cloud describes the use of both private and public cloud platforms, working in conjunction. It can refer to any combination of cloud solutions that work together on-premises and off-site to provide cloud computing services to a company. A hybrid cloud environment allows organizations to benefit from the advantages of both types of cloud platforms, and choose which cloud to use based on specific data needs. 
 
Cloud service providers may provide both public and private cloud options in a hybrid cloud offering, and the private cloud may be hosted on or off premises. Or, an organization might host its own private cloud on site, and also use off-site public cloud services for different data requirements or during spikes in demand. There are many different options possible, but tight integration between the private and public clouds is always critical to any successful hybrid cloud environment.
 
 

  17. Why Hybrid Cloud?

Companies use hybrid cloud to quickly and cost-effectively enhance their existing resources. They can keep sensitive data secure within a private cloud and also quickly add more computing, network bandwidth, or storage in a third-party public cloud to address temporary surges in demand.
 
Hybrid cloud can also refer to a single solution that incorporates multiple cloud platforms. In this case, there is a single management system to access and operate all of the cloud computing elements. While hybrid cloud describes the use of public cloud service or services in conjunction with an on-premises private cloud, multi-cloud refers to the use of multiple public cloud service providers. A hybrid cloud environment leverages both the private and public clouds to operate. A multi-cloud environment includes two or more public cloud vendors which provide cloud-based services to a business that may or may not have a private cloud. A hybrid cloud environment might also be a multi-cloud environment.
 

  18. What is hybrid cloud infrastructure?

The infrastructure that supports hybrid cloud typically includes a network, servers, and virtualization software. The servers host the data and display it remotely via the network. The virtualization software allows virtual resources, like desktops, to be displayed remotely. Because a hybrid cloud implies a combination of both in-house and third-party resources, these back-end components sit in two locations—on premises in the enterprise data center and with the third-party public cloud service provider. Because the cloud service provider supports multiple tenants with varying demands, they use powerful, high-density systems to host their cloud computing services. Virtualization software allows cloud service providers to host multiple operating systems on one server, maximizing their resources.
 

  19. What is Kubernetes?

Kubernetes is an open-source container orchestration platform that enables the operation of an elastic web server framework for cloud applications. Kubernetes can support data center outsourcing to public cloud service providers or can be used for web hosting at scale. Website and mobile applications with complex custom code can deploy using Kubernetes on commodity hardware to lower the costs on web server provisioning with public cloud hosts and to optimize software development processes.
 
 

  20. Kubernetes advantages

The main advantage of Kubernetes is the ability to operate an automated, elastic web server platform in production without the vendor lock-in to AWS with the EC2 service. Kubernetes runs on most public cloud hosting services and all of the major companies offer competitive pricing. Kubernetes enables the complete outsourcing of a corporate data center. Kubernetes can also be used to scale web and mobile applications in production to the highest levels of web traffic. Kubernetes allows any company to operate its software code at the same level of scalability as the largest companies in the world on competitive data center pricing for hardware resources.
 

  21. What is container orchestration?

Container orchestration is the management of individual web servers operating in containers through virtual partitions on data center hardware. Container orchestration is a means of maintaining the elastic framework for web servers in a data center on an automated basis in production. Administrators can establish resources that can be automatically started if web traffic increases over the capacity of a single server. For SaaS applications, this can scale to support millions of simultaneous users. 
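The "start more resources when traffic exceeds a single server's capacity" decision described above can be sketched as a small reconciliation loop. The per-instance capacity figure and message strings below are assumptions for illustration, not how any particular orchestrator (Kubernetes or otherwise) actually computes replicas.

```python
# Minimal sketch of an orchestrator's scaling decision: compare
# observed traffic against per-instance capacity and compute how
# many container instances should be running.
import math

def desired_instances(requests_per_sec, capacity_per_instance, minimum=1):
    """How many instances are needed to absorb the current traffic."""
    return max(minimum, math.ceil(requests_per_sec / capacity_per_instance))

def reconcile(running, requests_per_sec, capacity_per_instance=500):
    desired = desired_instances(requests_per_sec, capacity_per_instance)
    if desired > running:
        return f"scale up: start {desired - running} instance(s)"
    if desired < running:
        return f"scale down: stop {running - desired} instance(s)"
    return "no change"

# 1800 req/s at 500 req/s per instance needs 4 instances; 2 are running.
print(reconcile(running=2, requests_per_sec=1800))  # scale up: start 2 instance(s)
```

Real orchestrators run this comparison continuously against live metrics, which is what makes the framework "elastic" in production.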
 

  22. Kubernetes vs. Docker

Kubernetes is an open-source container orchestration platform. Docker is the main container virtualization standard used with Kubernetes. Other elastic web server orchestration systems are Docker Swarm, CoreOS Tectonic, and Mesosphere. Intel also has a competing container standard with Kata, and there are several Linux container versions. Docker has the largest share of the container virtualization marketplace for software products. Docker is a software development company that specializes in container virtualization, whereas Kubernetes is an open-source project supported by a community of coders that includes professional programmers from all of the major IT companies.
 

  23. What is multi-cloud?

Multi-cloud is a term for the use of more than one public cloud service provider for virtual data storage or computing power resources, in addition to other private cloud and on-premises infrastructure. A multi-cloud strategy not only provides more flexibility in which cloud services an enterprise chooses to use, opening up options for hybrid cloud solutions, it also reduces dependence on just one vendor.
 
Cloud service providers host three types of services: Infrastructure as a service (IaaS), Software as a service (SaaS) and platform as a service (PaaS). With IaaS, the cloud provider hosts servers, storage, and networking hardware with accompanying services including backup, security, and load balancing. PaaS adds operating systems and middleware to their IaaS offering, and SaaS includes applications so that nothing is hosted on a customer’s site. Cloud providers may also offer these services independently.
 
There are many possibilities when it comes to multi-cloud options and combinations. A company’s multi-cloud could include the use of multiple IaaS providers for different workloads and a public PaaS to test new cloud applications. Each company’s multi-cloud infrastructure will be different, depending on its needs and limitations.
 

  24. What is a private cloud?

Private cloud is an on-demand cloud deployment model where cloud computing services and infrastructure are hosted privately, often within a company’s own intranet or data center using proprietary resources and are not shared with other organizations. The company usually oversees the management, maintenance, and operation of the private cloud. A private cloud offers an enterprise more control and better security than a public cloud, but managing it requires a higher level of IT expertise.
 
In general, cloud computing allows organizations to move compute power, data storage, and other services away from on-premises servers and onto remote servers that employees or customers can access via the Internet. A company that wishes to use cloud computing services may choose between a private cloud (where cloud services are exclusive to the company) and a public cloud (where cloud services are owned and managed by a provider who also hosts other tenants), or a combination of the two, known as a hybrid cloud.
 
 

  25. What is public cloud?

A public cloud is a cloud deployment model where on-demand computing services and infrastructure from a third-party provider are shared across multiple organizations using the public Internet. The cloud service provider owns the physical infrastructure and takes care of maintaining and managing it. Public cloud often includes resources like virtual machines, applications, networking, and storage. Companies can choose between a public cloud and a private cloud (where data is stored within the company’s own data center), or a combination of the two, known as a hybrid cloud.
 

  26. What is SAN and how does it work?

A storage area network (SAN) is a dedicated, independent high-speed network that interconnects and delivers shared pools of storage devices to multiple servers. Each server can access shared storage as if it were a drive directly attached to the server. A SAN is typically assembled with cabling, host bus adapters, and SAN switches attached to storage arrays and servers. Each switch and storage system on the SAN must be interconnected.
 

  27. SAN Definition

A SAN (storage area network) is a network of storage devices that can be accessed by multiple servers or computers, providing a shared pool of storage space. Each computer on the network can access storage on the SAN as though it were a local disk connected directly to the computer.
 

  28. SAN vs. NAS

A SAN and network-attached storage (NAS) are two different types of shared networked storage solutions. While a SAN is a local network composed of multiple devices, NAS is a single storage device that connects to a local area network (LAN). 
 

  29. What is a SAN switch?

A SAN switch is hardware that connects servers to shared pools of storage devices. It is dedicated to moving storage traffic in a SAN.
 
 

  30. What is Server Virtualization?

Server virtualization is used to mask server resources from server users. This can include the number and identity of operating systems, processors, and individual physical servers.
 

  31. Server Virtualization Definition

Server virtualization is the process of dividing a physical server into multiple unique and isolated virtual servers by means of a software application. Each virtual server can run its own operating systems independently.
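The definition above can be sketched as a toy resource model: a physical server hands out fixed slices of CPU and memory to isolated virtual servers and refuses allocations beyond its physical capacity. Real hypervisors support oversubscription and far richer scheduling; this sketch, with invented class and field names, only illustrates the partitioning idea.

```python
# Toy sketch of carving one physical server into isolated virtual
# servers, each with a fixed slice of CPU and memory.
class PhysicalServer:
    def __init__(self, cpus, mem_gb):
        self.free_cpus = cpus
        self.free_mem_gb = mem_gb
        self.virtual_servers = {}

    def create_vm(self, name, cpus, mem_gb):
        # Refuse to allocate more than the host physically has.
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        self.virtual_servers[name] = {"cpus": cpus, "mem_gb": mem_gb}

host = PhysicalServer(cpus=16, mem_gb=64)
host.create_vm("web", cpus=4, mem_gb=8)
host.create_vm("db", cpus=8, mem_gb=32)
print(host.free_cpus, host.free_mem_gb)  # 4 24
```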
 

Key Benefits of Server Virtualization:

  1. Higher server availability
  2. Lower operating costs
  3. Reduced server complexity
  4. Increased application performance
  5. Faster workload deployment

  32. Three Kinds of Server Virtualization:

Full Virtualization: Full virtualization uses a hypervisor, a type of software that directly communicates with a physical server’s disk space and CPU. The hypervisor monitors the physical server’s resources and keeps each virtual server independent and unaware of the other virtual servers. It also relays resources from the physical server to the correct virtual server as it runs applications. The biggest limitation of using full virtualization is that a hypervisor has its own processing needs. This can slow down applications and impact server performance.

Para-Virtualization: Unlike full virtualization, para-virtualization involves the entire network working together as a cohesive unit. Since each operating system on the virtual servers is aware of one another in para-virtualization, the hypervisor does not need to use as much processing power to manage the operating systems.

OS-Level Virtualization: Unlike full and para-virtualization, OS-level virtualization does not use a hypervisor. Instead, the virtualization capability, which is part of the physical server operating system, performs all the tasks of a hypervisor. However, all the virtual servers must run the same operating system in this server virtualization method.

  33. What is software load balancing?

Software load balancing is how administrators route network traffic to different servers. Load balancers evaluate client requests by examining application-level characteristics (the IP address, the HTTP header, and the contents of the request). The load balancer then looks at the servers and determines which server to send the request to.
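The "determine which server to send the request to" step above can be sketched with two common policies: round-robin (rotate through servers in order) and least-connections (send to the server with the fewest active requests). Server names here are illustrative; production load balancers also do health checks, session affinity, and request inspection.

```python
# Sketch of two common software load-balancing policies.
import itertools

class RoundRobin:
    """Rotate through servers in a fixed order."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Send each request to the server with the fewest active connections."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}
    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server
    def done(self, server):
        self.active[server] -= 1  # call when the request finishes

rr = RoundRobin(["web1", "web2"])
print([rr.pick() for _ in range(4)])  # ['web1', 'web2', 'web1', 'web2']

lc = LeastConnections(["web1", "web2"])
first = lc.pick()   # each starts at 0 connections; 'web1' is chosen first
second = lc.pick()  # 'web2' now has fewer active connections
```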

  34. How Does VMware HA Work?


VMware HA continuously monitors all servers in a resource pool and detects server failures. An agent placed on each server maintains a “heartbeat” with the other servers in the resource pool, and a loss of heartbeat initiates the restart process of all affected virtual machines on other servers. VMware HA ensures that sufficient resources are available in the resource pool at all times to restart virtual machines on different physical servers in the event of a server failure. Restart of virtual machines is made possible by the Virtual Machine File System (VMFS), a clustered file system that gives multiple ESX Server instances concurrent read-write access to the same virtual machine files. VMware HA is easily configured for a resource pool through VirtualCenter.
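The heartbeat-and-restart flow can be sketched in miniature. This is a conceptual illustration only, with invented host names, timings, and a naive capacity model; it is not VMware's actual failover algorithm, which also handles admission control, isolation responses, and datastore heartbeats.

```python
# Sketch of the heartbeat idea behind HA: hosts whose heartbeat has
# gone silent past a timeout are declared failed, and their VMs are
# assigned to surviving hosts with spare capacity.
import time

def failed_hosts(heartbeats, now, timeout=15.0):
    """Hosts whose last heartbeat is older than `timeout` seconds."""
    return [h for h, last in heartbeats.items() if now - last > timeout]

def restart_plan(failed, vms_by_host, spare_slots):
    """Assign each affected VM to a surviving host with free capacity."""
    plan = {}
    for host in failed:
        for vm in vms_by_host.get(host, []):
            target = max(spare_slots, key=spare_slots.get)
            if spare_slots[target] <= 0:
                raise RuntimeError("insufficient spare capacity")
            spare_slots[target] -= 1
            plan[vm] = target
    return plan

now = time.time()
heartbeats = {"esx1": now - 30, "esx2": now - 1, "esx3": now - 2}
failed = failed_hosts(heartbeats, now)  # esx1 has been silent for 30s
plan = restart_plan(failed, {"esx1": ["vm1", "vm2"]},
                    {"esx2": 1, "esx3": 1})
print(plan)  # {'vm1': 'esx2', 'vm2': 'esx3'}
```

The "sufficient resources at all times" guarantee in the text corresponds to reserving enough `spare_slots` up front that `restart_plan` can never fail.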
 

  35. What is a Virtual Machine?

A virtual machine, known as a guest, is created within a computing environment, called a host. Multiple virtual machines can exist in one host at one time. Key files that make up a virtual machine include a log file, NVRAM setting file, virtual disk file, and configuration file.
 

  36. Virtual Machine Definition

Virtual machines are software computers that provide the same functionality as physical computers. Like physical computers, they run applications and an operating system. However, virtual machines are computer files that run on a physical computer and behave like a physical computer. In other words, virtual machines behave as separate computer systems.
 

The Two Types of Virtual Machines:

  1. Process virtual machines: Execute computer programs in a platform-independent environment, masking the details of the underlying hardware and operating system so that a program runs in the same way on any platform.
  2. System virtual machines: Support the sharing of a host computer’s physical resources between multiple virtual machines.  

  37. What is virtual networking?

Virtual networking enables communication between multiple computers, virtual machines (VMs), virtual servers, or other devices across different office and data center locations. While physical networking connects computers through cabling and other hardware, virtual networking extends these capabilities by using software management to connect computers and servers over the Internet. It uses virtualized versions of traditional network tools, like switches and network adapters, allowing for more efficient routing and easier network configuration changes.
 
Virtual networking enables devices across many locations to function with the same capabilities as a traditional physical network. This allows for data centers to stretch across different physical locations, and gives network administrators new and more efficient options, like the ability to easily modify the network as needs change, without having to switch out or buy more hardware; greater flexibility in provisioning the network to specific needs and applications; and the capacity to move workloads across the network infrastructure without compromising service, security, and availability.
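A core building block behind the "virtualized versions of traditional network tools" mentioned above is the learning virtual switch. The sketch below is a simplified model with invented names: it learns which port each source MAC address lives on and forwards a frame to the known port, or floods it to all other ports when the destination is unknown, the same behavior a physical learning switch implements in hardware.

```python
# Toy model of a learning virtual switch: learn source MAC -> port
# mappings, forward to a known port, flood when the destination
# is unknown.
class VirtualSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # known destination
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = VirtualSwitch(ports=[1, 2, 3])
print(sw.receive(1, "aa:aa", "bb:bb"))  # [2, 3]  (bb:bb unknown, flood)
print(sw.receive(2, "bb:bb", "aa:aa"))  # [1]     (aa:aa was learned)
```

Because the switch is pure software, reconfiguring it is a data change rather than a hardware swap, which is exactly the flexibility the paragraph above attributes to virtual networking.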
 
 

  38. vSphere and vCenter Server

VMware vSphere is a suite of virtualization applications that includes ESXi and vCenter Server.
vSphere uses virtualization to do the following tasks:

•Run multiple operating systems on a single physical machine simultaneously.
•Reclaim idle resources and balance workloads across multiple physical machines.
•Work around hardware failures and scheduled maintenance.

Familiarity with the components that make up a vSphere environment helps you understand the setup process and, ultimately, the process of using VMware vCenter Server to manage hosts and run virtual machines.
vSphere includes the following components in addition to the ESXi host and vSphere Client that you have already set up:

vSphere Web Client: The vSphere Web Client is the interface to vCenter Server and multi-host environments. It also provides console access to virtual machines. The vSphere Web Client lets you perform all administrative tasks by using an in-browser interface.

VMware vCenter Server: vCenter Server unifies resources from individual hosts so that those resources can be shared among virtual machines in the entire datacenter. It accomplishes this by managing the assignment of virtual machines to the hosts and the assignment of resources to the virtual machines within a given host, based on the policies that the system administrator sets.
vCenter Server allows the use of advanced vSphere features such as vSphere Distributed Resource Scheduler (DRS), vSphere High Availability (HA), vSphere vMotion, and vSphere Storage vMotion.

Datacenter: A datacenter is a structure under which you add hosts and their associated virtual machines to the inventory.

Host: A host is a computer that uses ESXi virtualization software to run virtual machines. Hosts provide CPU and memory resources, access to storage, and network connectivity for virtual machines that reside on them.

Virtual Machine: A virtual machine is a software computer that, like a physical computer, runs an operating system and applications. Multiple virtual machines can run on the same host at the same time. Virtual machines that vCenter Server manages can also run on a cluster of hosts.
Via VMware.com
