# Services

## JupyterHub

### Documentation

STATUS: In Testing
URL: https://jupyter.k8s.ucar.edu/
An on-premise JupyterHub is up and running but is still in development. Authentication is handled via a GitHub team in the NCAR organization. Each user session that is spun up has read-only access to a shared NFS volume, as well as to the GLADE collections and campaign directories, which are also mounted read-only. The user notebooks that can be deployed are custom container images maintained by the CCPP team. Documentation on how this was set up and deployed can be found on the how-to page in this documentation.
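The how-to page has the real deployment details, but as a rough illustration, a Zero to JupyterHub `values.yaml` for this kind of setup might look like the sketch below. The team name, NFS server, paths, and image name are placeholders, not the actual configuration.

```yaml
# Hypothetical Zero to JupyterHub values.yaml sketch -- names and paths
# are placeholders, not the actual deployment configuration.
hub:
  config:
    JupyterHub:
      authenticator_class: github
    GitHubOAuthenticator:
      oauth_callback_url: https://jupyter.k8s.ucar.edu/hub/oauth_callback
      allowed_organizations:
        - "NCAR:example-team"   # placeholder GitHub team in the NCAR org
      scope:
        - read:org              # needed to check team membership
singleuser:
  image:
    name: hub.k8s.ucar.edu/example-project/example-notebook  # placeholder image
    tag: latest
  storage:
    extraVolumes:
      - name: glade
        nfs:
          server: nfs.example.ucar.edu   # placeholder NFS server
          path: /glade
          readOnly: true
    extraVolumeMounts:
      - name: glade
        mountPath: /glade
        readOnly: true
```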
## Binder

### Documentation

STATUS: In Testing
URL: https://binder.k8s.ucar.edu/
Binder is hosted on premises and has its own dedicated JupyterHub for deploying containerized code repositories. Authentication is handled via the same GitHub team in the NCAR organization as the JupyterHub. Binder also has access to the GLADE campaign and collections directories as well as internal resources.
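Repositories launched on Binder declare their own dependencies through standard repo2docker configuration files; for example, a repository containing a hypothetical `environment.yml` like the one below would be built into a container image and served from the dedicated hub. The environment name and package list are illustrative only.

```yaml
# Hypothetical environment.yml in a repository launched via Binder;
# the image is built with these dependencies baked in.
name: example-analysis
channels:
  - conda-forge
dependencies:
  - python=3.11
  - xarray
  - netcdf4
  - matplotlib
```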
## AWS instance supported by 2i2c

STATUS: In Testing
URL: https://ncar-cisl.2i2c.cloud/
The JupyterHub instance set up and managed by 2i2c, which runs on AWS, is up and ready to use. Access to this JupyterHub is controlled via a GitHub team, specifically the NCAR organization's 2i2c-cloud-users team.
# Virtualization

## Kubernetes (k8s)
We have a Kubernetes cluster that we can utilize to host containers, and we can provide users a private namespace to deploy into. We can also provision full Kubernetes clusters for users via Rancher; users administer their own k8s clusters, which gives them more freedom to customize them to their needs and requirements.
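As a rough sketch of the private-namespace model, granting a user the built-in `edit` role scoped to their own namespace might look like the manifests below; the namespace and user names are placeholders.

```yaml
# Hypothetical private namespace for a user, with the built-in "edit"
# ClusterRole bound only within that namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: example-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-user-edit
  namespace: example-user
subjects:
  - kind: User
    name: example-user          # placeholder identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                    # built-in Kubernetes ClusterRole
  apiGroup: rbac.authorization.k8s.io
```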
### Rancher

STATUS: In Testing
URL: https://rancher.k8s.ucar.edu/
We have an instance of Rancher running that provides a user interface for interacting with k8s clusters. Users can request access to the Rancher instance, log in, and download a kubeconfig to access the cluster and deploy k8s resources. User permissions are scoped to limit what each user can view and where they can deploy.
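The kubeconfig downloaded from Rancher is an ordinary YAML file; a trimmed-down sketch looks roughly like the following, with placeholder names and the token redacted.

```yaml
# Trimmed-down sketch of a kubeconfig downloaded from Rancher;
# cluster name, cluster ID, and token are placeholders.
apiVersion: v1
kind: Config
clusters:
  - name: example-cluster
    cluster:
      server: https://rancher.k8s.ucar.edu/k8s/clusters/example-cluster-id
users:
  - name: example-user
    user:
      token: <redacted>
contexts:
  - name: example-cluster
    context:
      cluster: example-cluster
      user: example-user
current-context: example-cluster
```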
### Harbor

STATUS: In Testing
URL: https://hub.k8s.ucar.edu/
We utilize Harbor, an open-source container registry, hosted close to the infrastructure that runs the containers. A local registry lets us take advantage of the network infrastructure and available bandwidth between our hardware, speeding up image pushes and pulls. Harbor also includes an image scanner that reports any vulnerabilities an image contains, so we can address security concerns with images directly.
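Workloads reference images in the local registry by its hostname. A minimal sketch of a deployment pulling from Harbor might look like this; the project and image names are placeholders.

```yaml
# Hypothetical deployment pulling an image from the local Harbor
# registry; project and image names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: hub.k8s.ucar.edu/example-project/example-app:1.0.0
```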
### Argo CD

STATUS: In Testing
We have an instance of Argo CD installed to handle Continuous Delivery (CD). Once an application's Git repository is set up in Argo CD, any changes made to that repository can be deployed automatically, without intervention by users or admins. This lets users deploy their applications to k8s without having to interact directly with Kubernetes.
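As an illustration, an Argo CD Application with automated sync looks roughly like the sketch below; the repository URL, path, and namespaces are placeholders.

```yaml
# Hypothetical Argo CD Application with automated sync: commits to the
# tracked repo are deployed without manual intervention.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-app.git  # placeholder repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from the repo
      selfHeal: true   # revert manual drift back to the repo state
```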
## Virtual Machines (VMs)
We can provide VMs to users as needed for tasks that aren’t well suited for running in a container.
# Storage

## Persistent Volumes
Rook is used to provide storage orchestration to k8s workloads. Rook utilizes Ceph as a distributed storage system to provide persistent file, block, and object storage to the k8s cluster and the workloads it hosts.
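For example, a workload can claim Ceph-backed block storage through a PersistentVolumeClaim. The StorageClass name below follows Rook's common example (`rook-ceph-block`) but is an assumption about this cluster.

```yaml
# Hypothetical PersistentVolumeClaim backed by Rook/Ceph; the
# StorageClass name may differ on this cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block
```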
## GLADE Access
Read-only access to data stored on GLADE is provided via NFS to the k8s nodes and then exposed to objects in the cluster via Rook. GLADE is managed by the Advanced Research Computing division of NSF NCAR | CISL.
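A pod might consume that storage through a read-only mount, roughly as sketched below; the claim name, image, and mount path are placeholders.

```yaml
# Hypothetical pod mounting a GLADE-backed claim read-only;
# claim name, image, and mount path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-glade-reader
spec:
  containers:
    - name: reader
      image: hub.k8s.ucar.edu/example-project/example-app:1.0.0
      volumeMounts:
        - name: glade
          mountPath: /glade
          readOnly: true
  volumes:
    - name: glade
      persistentVolumeClaim:
        claimName: example-glade-claim
        readOnly: true
```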
## Object Storage
Object storage is available via Stratus, and our admins can create new buckets and assign user permissions for S3 interactions.
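A workload in the cluster might consume its assigned Stratus credentials from a Kubernetes Secret, as in the sketch below; the names, keys, and endpoint URL are all placeholders.

```yaml
# Hypothetical Secret holding Stratus S3 credentials, consumed as
# environment variables; names and endpoint are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: stratus-credentials
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <your-access-key>
  AWS_SECRET_ACCESS_KEY: <your-secret-key>
---
apiVersion: v1
kind: Pod
metadata:
  name: example-s3-client
spec:
  containers:
    - name: client
      image: hub.k8s.ucar.edu/example-project/example-app:1.0.0
      envFrom:
        - secretRef:
            name: stratus-credentials
      env:
        - name: S3_ENDPOINT_URL
          value: https://stratus.example.ucar.edu  # placeholder endpoint
```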
# Network Services

## ExternalDNS
Our system has ExternalDNS configured to provide DNS records, giving hosted systems full name resolution and usable URLs.
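ExternalDNS watches for hostname annotations (and Ingress hosts) and creates the matching DNS records automatically. A minimal sketch of an annotated Service follows; the hostname, selector, and ports are placeholders.

```yaml
# Hypothetical Service annotated for ExternalDNS; the hostname is a
# placeholder under the cluster's domain.
apiVersion: v1
kind: Service
metadata:
  name: example-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example-app.k8s.ucar.edu
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
```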
## cert-manager
To keep the exposed URLs secure, we implemented cert-manager to issue valid certificates to applications and manage the lifecycle of those certificates. This ensures all services are accessible only via HTTPS.
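A minimal ClusterIssuer sketch is shown below. Whether this cluster uses Let's Encrypt or another CA is an assumption, and the issuer name and email are placeholders.

```yaml
# Hypothetical ClusterIssuer sketch; the CA choice, issuer name, and
# email are assumptions, not the actual configuration.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.ucar.edu       # placeholder contact
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```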
## Nginx Ingress Controller
The Kubernetes cluster exposes applications to the network using the Nginx Ingress Controller. By combining Ingress with ExternalDNS and cert-manager, routable HTTPS addresses can be tied directly to deployed applications, allowing content to be shared with the greater community.
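The sketch below ties the pieces together: nginx routes the traffic, ExternalDNS publishes the host, and cert-manager issues the TLS certificate. The hostname, issuer, and service names are placeholders.

```yaml
# Hypothetical Ingress combining nginx routing, an ExternalDNS-published
# host, and a cert-manager-issued certificate; names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # placeholder issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example-app.k8s.ucar.edu
      secretName: example-app-tls
  rules:
    - host: example-app.k8s.ucar.edu
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```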