Every user in Cloudbreak has their own Workspace, and each workspace needs credentials for a cloud platform in order to provision clusters. Upon login, we land in our default workspace, which has no credentials or resources attached.
Let's switch to the shared workshop workspace.
Cloudbreak supports multiple Workspaces that can be shared amongst users and groups. Workspaces are a great way to logically separate different credentials, environments, resources, teams, and more.
In this case, each user's default workspace is named after them.
For this workshop there is a shared workspace with a single large cluster that all the students will use. Its name varies with the workshop's location, title, or theme.
In the upper right-hand corner, click on the User/Workspace dropdown and select the "workshop" Workspace.
Now that we are in the proper workspace, we can see a cluster available and we no longer receive the "No Credentials" error.
This is because the shared workspace has credentials and resources pre-provisioned by an administrator, so that, as data scientists, we don't have to worry about clouds, keys, or resources.
Select the available cluster.
Once you've selected the workshop cluster, you can see an overview of the various components, configurations, resources, and more.
Since we're not worried about the operation of the cluster, we can jump directly into this cluster to continue.
Click on the "Ambari URL" link to jump into Ambari
Ambari is one of the management components of a Hortonworks cluster. Ambari allows you to add, control, reconfigure, and otherwise manage the components of your cluster.
Here we can see the Dashboard which shows an overview of some important metrics about our cluster.
Ambari also streams actions and notifications, lets you interact with your HDFS store, and offers self-healing capabilities.
On the left hand side you can see a listing of Services such as HDFS, YARN, and so on.
These same Services are also available as a dropdown menu in the top menu bar under Services.
Select the Zeppelin Notebook service and from the Quick Links menu select the Zeppelin UI option to load Zeppelin.
Apache Zeppelin is a component of the Hortonworks cluster that allows you to run Data-driven workloads.
Zeppelin's central draw is its Notebooks, each of which is composed of Paragraphs.
Each Paragraph can use a different Interpreter, such as Markdown, Python, SQL, Spark, and more.
Because these Notebooks are text-based, they can be checked into a version control system such as GitHub and easily shared and collaborated on.
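To make the text-based format concrete, here is a minimal sketch (the note name, table name, and paragraph contents are invented for illustration; Zeppelin's actual note schema contains additional fields). A Zeppelin Notebook is stored as plain JSON, with each Paragraph's Interpreter selected by the %-directive on its first line, which is why notes diff, merge, and version-control like any other text file:

```python
import json

# A minimal Zeppelin-style note, sketched by hand. The real schema has
# more fields; only the parts relevant to this example are shown.
# "sample_table" is a hypothetical table name.
note_json = """
{
  "name": "Workshop Demo",
  "paragraphs": [
    {"text": "%md\\n## Hello workshop"},
    {"text": "%sql\\nSELECT COUNT(*) FROM sample_table"},
    {"text": "%python\\nprint('hello from a Python paragraph')"}
  ]
}
"""

note = json.loads(note_json)

# The Interpreter for each Paragraph is the %-directive on its first line.
interpreters = [p["text"].split("\n", 1)[0] for p in note["paragraphs"]]
print(interpreters)  # ['%md', '%sql', '%python']
```

Because the whole note is a single JSON document, a change to one Paragraph shows up as a small, reviewable diff in a system such as GitHub.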
In the upper right-hand corner, click on the Login button.
Enter the same credentials you've been using for the rest of this workshop.
All authentication is handled by Red Hat Identity Management over enterprise LDAP, which integrates natively with Hortonworks products.
Upon logging into Zeppelin, continue onto the next exercise in the workshop.