Facilities
San Pedro I Building and UTSA School of Data Science
Constructed in 2022, San Pedro I spans 167,000 square feet across six floors. Situated at 506 Dolorosa St. in downtown San Antonio, the building is positioned along San Pedro Creek, east of UTSA's Downtown Campus. It stands as a symbol of UTSA's commitment to anchoring itself to San Antonio's downtown core and expanding its urban footprint. As the primary home of the new UTSA School of Data Science, San Pedro I accommodates programs in areas including artificial intelligence, computer science, and data analytics. It also hosts at least 16 UTSA research centers, institutes, and college-level labs, such as the MATRIX AI Consortium for Human Well-being and the MILO Lab.
MILO Lab
MILO Lab is a high-tech hub for AI and machine learning research, situated in a dedicated space in the new San Pedro I building in downtown San Antonio. The lab's computing capacity is anchored by an NVIDIA DGX A100 system, a comprehensive platform for AI workloads. The system delivers 5 petaFLOPS of AI performance and comprises 8x NVIDIA A100 Tensor Core GPUs with 40 GB of memory each and dual AMD Rome 7742 CPUs with 128 cores in total. In addition to the DGX A100, MILO Lab houses four Lambda Vector Threadripper Pro workstations, each outfitted with an AMD Threadripper Pro 5955WX processor and dual NVIDIA RTX A6000 GPUs with 48 GB of memory each. MILO Lab also accommodates six Dell OptiPlex 7000 Small Form Factor systems, each powered by a 12th Generation Intel Core i7-12700 processor with 32 GB of DDR4 non-ECC memory and a 512 GB PCIe NVMe Class 35 solid-state drive. These systems are well suited for smaller numerical experiments, code prototyping and debugging, and report or presentation preparation.
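As a point of reference, the short Python sketch below shows how a researcher might enumerate the DGX A100's eight GPUs and their memory before launching an experiment. It assumes only that a standard PyTorch installation with CUDA support is available on the system; nothing in it is specific to MILO Lab's software environment.

    import torch  # assumes a PyTorch build with CUDA support

    if torch.cuda.is_available():
        # On the DGX A100 this is expected to report eight A100 devices, 40 GB each.
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
    else:
        print("No CUDA devices visible; check drivers or CUDA_VISIBLE_DEVICES.")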
MATRIX AI Consortium
MILO Lab is a member of the MATRIX AI Consortium, UTSA's leading organization for AI research. Offering a robust array of on-premises and cloud-based resources, MATRIX provides significant computational support to its members. Specifically, MATRIX has procured three NVIDIA DGX Stations and three 8-GPU Lambda Blade deep learning servers to serve the computational needs of its members. The DGX systems are configured as a cluster in a mini-pod setup, allowing applications to be optimized for running on a production cluster. Each NVIDIA DGX Station is equipped with an Intel Xeon E5-2698 v4 2.2 GHz (20-core) CPU, four NVIDIA Tesla V100-DGXS-32GB GPUs with 32 GB per GPU, and 256 GB of ECC RDIMM DDR4 system memory. Of the three Lambda Blade servers, two are each configured with 8x NVIDIA Quadro RTX 6000 GPUs, 768 GB of GPU memory, and 2x Intel Xeon Gold 6230 CPUs (20 cores, 2.10 GHz); the third is configured with 8x NVIDIA Quadro RTX 8000 GPUs, 768 GB of GPU memory, and 2x Intel Xeon Gold 5218 CPUs (16 cores, 2.10 GHz). Additionally, MATRIX provides researchers with both in-house and external guidance on establishing cloud-based AI computing workflows, along with preferred academic pricing from external vendors.
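To illustrate how a single 8-GPU Lambda Blade server can be put to work, the sketch below splits a batch across all visible GPUs with PyTorch's nn.DataParallel. The model and batch sizes are placeholders chosen for illustration, and PyTorch itself is an assumed, not documented, part of the MATRIX software stack.

    import torch
    import torch.nn as nn

    # Placeholder model; nn.DataParallel replicates it on every visible GPU
    # and scatters each input batch across them.
    model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)  # uses all visible GPUs (0..7 on a Lambda Blade) by default
    model = model.to("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(256, 1024, device=next(model.parameters()).device)
    print(model(x).shape)  # torch.Size([256, 10])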
UTSA ARC Cluster
The ARC cluster is UTSA's High-Performance Computing (HPC) system. It comprises 169 compute/GPU nodes and 2 login nodes in total; the majority are based on Intel Cascade Lake CPUs, with some on AMD EPYC CPUs. The configuration is as follows (a brief usage sketch appears after the node list):
30 GPU nodes, each containing two Intel CPUs with 20 cores each (40 cores total), 384 GB of RAM, and one NVIDIA V100 GPU accelerator.
5 GPU nodes, each containing two Intel CPUs with 20 cores each (40 cores total), 384 GB of RAM, and two NVIDIA V100 GPU accelerators.
2 GPU nodes, each containing two Intel CPUs, four NVIDIA V100 GPUs, and 384 GB of RAM.
2 GPU nodes, each containing two AMD EPYC CPUs, one NVIDIA A100 80 GB GPU, and 1 TB of RAM.
2 large-memory nodes, each containing four Intel CPUs with 20 cores each (80 cores total) and 1.5 TB of RAM.
1 large-memory node, equipped with two AMD EPYC CPUs and 2 TB of RAM.
1 node equipped with two AMD EPYC CPUs and 1 TB of RAM.
5 nodes, each equipped with two AMD EPYC CPUs, one NEC vector engine, and 1 TB of RAM.
100 Gbps InfiniBand connectivity.
Two Lustre filesystems: /home and /work, where /home has 110 TB of capacity and /work has 1.1 PB of capacity.
A cumulative total of 250TB of local scratch (approximately 1.5 TB of scratch space on most compute/GPU nodes).
3 NVIDIA DGX A100 systems, each with 8x A100 80 GB GPUs and 2 TB of memory.
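As a rough illustration (not an ARC-specific recipe), the sketch below shows how a multi-node PyTorch job might initialize NCCL communication over the cluster's InfiniBand fabric and stage its outputs on the /work Lustre filesystem. It assumes the job launcher (for example, torchrun under the cluster's scheduler) has already exported the standard RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT environment variables; the project directory name is a placeholder.

    import os
    import torch
    import torch.distributed as dist

    # Assumes RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT are set by the launcher.
    dist.init_process_group(backend="nccl")  # NCCL traffic runs over the InfiniBand fabric
    rank = dist.get_rank()
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # Large shared outputs belong on the /work Lustre filesystem rather than /home;
    # "my_project" is a hypothetical directory name.
    ckpt_dir = "/work/my_project/checkpoints"
    if rank == 0:
        os.makedirs(ckpt_dir, exist_ok=True)

    dist.barrier()
    print(f"rank {rank} of {dist.get_world_size()} ready on local GPU {local_rank}")
    dist.destroy_process_group()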