
This page is currently under construction. Information will be updated soon.

Architecture

TIDE is an NSF-backed resource (award #2346701) and is part of the National Research Platform's Nautilus platform. The award enabled SDSU to invest in hardware supporting instructional use cases, and SDSU hosts that hardware in its campus data center. TIDE is a distributed Kubernetes-based compute environment supporting both CPU and GPU workloads. Its resources are available to researchers at no cost for both funded and unfunded research projects, with SDSU users receiving priority before resources become available to other Nautilus users.
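
Because TIDE is Kubernetes-based, workloads are typically submitted as containerized pods with explicit CPU, memory, and GPU requests. The sketch below is a minimal, illustrative example using the official Kubernetes Python client; the namespace, container image, and resource values are placeholders, not TIDE-specific settings.

```python
# Minimal sketch: submit a GPU pod to a Kubernetes cluster such as TIDE.
# Assumes a valid kubeconfig and an existing namespace; the namespace,
# image, and resource values are illustrative placeholders, not TIDE defaults.
from kubernetes import client, config

config.load_kube_config()          # use local kubeconfig credentials
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="example-gpu-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvidia/cuda:12.4.1-runtime-ubuntu22.04",  # illustrative image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
                    limits={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

core.create_namespaced_pod(namespace="my-namespace", body=pod)  # hypothetical namespace
```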

Compute

SDSU hosts 27 Dell PowerEdge servers supporting TIDE.

| Node Type | Quantity | Specifications (per node) |
|-----------|----------|---------------------------|
| GPU | 17 | PowerEdge R760; (2x) Intel Xeon Silver 4410Y 2.0 GHz, 12C/24T; (4x) Nvidia L40, 48 GB GPU memory; 512 GB system RAM |
| GPU | 1 | PowerEdge R750XA; (2x) Intel Xeon Gold 6338 2.0 GHz, 32C/64T; (4x) Nvidia A100, 80 GB GPU memory; 512 GB system RAM |
| CPU | 6 | PowerEdge R760; (2x) Intel Xeon Gold 6430 2.1 GHz, 32C/64T; 768 GB system RAM |
| Storage | 3 | PowerEdge R760; (3x) Intel Xeon Gold 6442Y 2.6 GHz, 24C/48T; 240 TB storage; 256 GB system RAM |
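
Because the cluster mixes L40 and A100 GPU nodes, a workload that needs a specific GPU model is usually pinned with a node selector or affinity rule. The snippet below is a hedged sketch: it assumes the nodes carry the `nvidia.com/gpu.product` label that NVIDIA's GPU feature discovery commonly applies, and the label values shown are illustrative rather than confirmed TIDE values.

```python
# Sketch: target a specific GPU model by node label.
# The label key and values below (nvidia.com/gpu.product, "NVIDIA-L40") are
# an assumption based on common NVIDIA GPU feature discovery conventions,
# not values taken from TIDE documentation.
from kubernetes import client

gpu_pod_spec = client.V1PodSpec(
    restart_policy="Never",
    node_selector={"nvidia.com/gpu.product": "NVIDIA-L40"},  # hypothetical label value
    containers=[
        client.V1Container(
            name="l40-job",
            image="nvidia/cuda:12.4.1-runtime-ubuntu22.04",  # illustrative image
            command=["nvidia-smi"],
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"},
            ),
        )
    ],
)
```

Omitting the selector lets the scheduler place the pod on any node with a free GPU of either model.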

Storage

Each TIDE user is provided up to 50 GB of storage space accessible from JupyterHub; additional storage can be granted upon request. This space is provided by LINSTOR storage hosted on the three storage nodes. It is recommended that persistent data be maintained elsewhere, ideally in the cloud.
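
For data that needs to outlive a single JupyterHub session within the cluster, Kubernetes users typically request a PersistentVolumeClaim. The sketch below assumes a hypothetical storage class name (`linstor`) and namespace; the actual class name and per-user limits on TIDE would need to be confirmed.

```python
# Sketch: request a 50 Gi persistent volume claim.
# The storage class name "linstor" and the namespace are assumptions for
# illustration; they are not confirmed TIDE values.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="example-vol"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="linstor",  # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="my-namespace", body=pvc)
```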

Networking

Nautilus requires a Science DMZ network architecture to integrate into its fabric. SDSU has implemented an instructional and enterprise Science DMZ that isolates research traffic using 2×10 Gb CENIC CalREN-HPR uplinks. TIDE nodes therefore use SDSU's Science DMZ, which provides 100 Gb connectivity to CENIC's HPR network and 10 Gb connectivity to its DC network.

Note that for traffic not destined for CalREN-HPR, a 20 Gb interconnect directs it over SDSU's 100 Gb CENIC CalREN-DC connection.