Yi Hong

The University of Georgia

CSCI 8955: Advanced Data Analytics: Statistical Learning and Optimization

Overview: The last decade has witnessed an increasing amount of data generated by modern applications, such as daily photos, videos, and medical image scans. There is a great need for techniques to learn from this data. In this course, we will discuss advanced topics in data analysis, with an emphasis on statistical learning and related optimization problems. In particular, we will cover learning methods (e.g., image regression, dictionary learning, random forests, and deep learning), numerical optimization approaches (e.g., the adjoint method, coordinate descent, and stochastic gradient descent), and their connections. The applications include prediction, classification, segmentation, and other tasks in image analysis. This course is targeted towards graduate students, and the lectures will be based on articles from journals and conferences in the field of computer vision and medical image analysis.

Course Information

  • Section number: 45978

  • Class meetings: Online (asynchronous); lecture videos will be released on the morning of each meeting day.

  • Instructor: Yi Hong (yi.hong -at- uga.edu)

  • Office hours: TR 10:00am - 11:00am or by appointment

  • Course webpage: http://cobweb.cs.uga.edu/~yihong/CSCI8955-Fall2020.html

Topics

  • Sparse representation, dictionary learning, and low-rank approximation

  • Random forests: classification forests and regression forests

  • Deep learning: convolutional neural networks (CNNs) and recurrent neural networks (RNNs)

  • Image and shape regression: cross-sectional and longitudinal studies

Prerequisites

No prior experience in computer vision or medical image analysis is required; however, some exposure to image analysis, machine learning, or numerical computation is highly recommended.

Grading

This course will mix lectures with reading and discussion of related research papers and projects. Students should present their selected papers and work on a project related to the topics discussed in this course. In particular, the grading will be based on

  • Project (40%), including proposal (5%), update (5%), presentation (15%, pre-recorded video), and write-up (15%)

  • Paper presentation (25%, pre-recorded video submitted two days before the presentation day)

  • Topic summaries (20%, 5% for each topic)

  • Participation (15%), including study group (5%, submit questions three days before the presentation day) and presentation evaluation (10%, submit evaluation forms by 11:59pm ET on the presentation day).

There is no exam.

Late Policy: 1 day late (10% off), 2 days late (20% off), 3 days late (30% off). Submissions more than 3 days late are not accepted.

Grading scale: A: [93%, 100%], A-: [90%, 93%), B+: [87%, 90%), B: [83%, 87%), B-: [80%, 83%), C+: [77%, 80%), C: [70%, 77%), C-: [65%, 70%), D: [60%, 65%), F: [0%, 60%).
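For illustration only, the weighting, late policy, and letter-grade scale above can be sketched as a short Python snippet (the component scores are hypothetical, and this is not an official grade calculator):

```python
# Illustrative sketch of the grading scheme above (hypothetical scores,
# not an official grade calculator).

WEIGHTS = {
    "project": 0.40,             # proposal 5% + update 5% + presentation 15% + write-up 15%
    "paper_presentation": 0.25,
    "topic_summaries": 0.20,     # 5% per topic, four topics
    "participation": 0.15,       # study group 5% + presentation evaluation 10%
}

def late_penalty(score, days_late):
    """Late policy: 10% off per late day, up to 3 days; later submissions get 0."""
    if days_late <= 0:
        return score
    if days_late > 3:
        return 0.0
    return score * (1 - 0.10 * days_late)

def final_percentage(scores):
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def letter_grade(pct):
    """Map a percentage to the letter scale above (lower bounds inclusive)."""
    scale = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
             (77, "C+"), (70, "C"), (65, "C-"), (60, "D"), (0, "F")]
    for lower, letter in scale:
        if pct >= lower:
            return letter

# Example with hypothetical component scores:
scores = {"project": 92, "paper_presentation": 88,
          "topic_summaries": 95, "participation": 100}
pct = final_percentage(scores)
print(round(pct, 1), letter_grade(pct))  # a 92.8 average falls in the A- band
```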

Tentative Schedule

Date Topic Reading Presenter To Do
Week 0 Aug 20 (R) Course Introduction and Overview -- Yi --
Week 1 Statistical Learning Basics
Aug 25 (T) Math Basics I [Goodfellow et al.] Chapter 2 Yi --
Aug 26 (W) Math Basics II [Goodfellow et al.] Chapter 3 Yi --
Aug 27 (R) Numerical Computation I [Goodfellow et al.] Chapter 4 Yi --
Week 2 Sep 1 (T) Numerical Computation II [Goodfellow et al.] Chapter 4 Yi --
Sep 2 (W) Machine Learning Basics [Goodfellow et al.] Chapter 5 Yi --
Topic 1: Sparse Representation, Dictionary Learning, and Low-Rank Approximation
Sep 3 (R) Sparse Coding [Mairal et al.] Chapter 1 Yi --
Week 3 Sep 8 (T) Dictionary Learning [Mairal et al.] Chapter 1,2,3 Yi --
Sep 9 (W) Optimization for Sparse Coding and Dictionary Learning I [Mairal et al.] Chapter 5 Yi --
Sep 10 (R) Optimization for Sparse Coding and Dictionary Learning II [Mairal et al.] Chapter 5 Yi --
Week 4 Sep 15 (T) Potential Course Projects; Low Rank Approximation and Optimization I [Udell et al.] Yi --
Sep 16 (W) Low Rank Approximation and Optimization II [Udell et al.] Yi Topic 1 Summary Due on Sunday, at 11:59pm ET
Topic 2: Random Forests
Sep 17 (R) Intro. to Random Forests [Criminisi et al.] TR-Chapter 2; [Hastie et al.] Chapter 15 Yi --
Week 5 Sep 22 (T) Classification Forests and Regression Forests [Criminisi et al.] TR-Chapter 3 & 4 Yi --
Sep 23 (W) Random Forests Applications "An introduction to random forests for multi-class object detection"; "Filter forests for learning data-dependent convolutional kernels" Yi --
Sep 24 (R) Paper Reading "Incremental learning of random forests for large-scale image classification" Yi --
Week 6 Sep 29 (T) Paper Presentation I (Topic 1) "Top-Down Visual Saliency via Joint CRF and Dictionary Learning" Padmanaban & Akarsh (Group 1) --
Paper Reading "Low-rank to the rescue: Atlas-based analyses in the presence of pathologies" Yi --
Sep 30 (W) Paper Reading "Linear spatial pyramid matching using sparse coding for image classification" Yi --
Oct 1 (R) Project Proposals -- All Submit 1-page proposal and 5-minute video presentation (due on Sep 29, at 11:59pm ET)
Week 7 Oct 6 (T) Paper Presentation I (Topic 2) "Narrowing the gap: Random forests in theory and in practice" Denna & Himabindu (Group 2)
Paper Presentation II (Topic 2) "Fast and accurate image upscaling with super-resolution forests" Zhongliang (Group 3)
Oct 7 (W) Paper Reading "Improved Random Forest for Classification" Yi (Group 4) Topic 2 Summary Due on Sunday, at 11:59pm ET
Topic 3: Deep Learning
Oct 8 (R) Deep Feedforward Networks [Goodfellow et al.] Chapter 6 Yi --
Week 8 Oct 13 (T) Regularization for Deep Learning [Goodfellow et al.] Chapter 7 Yi --
Oct 14 (W) Optimization for Deep Learning [Goodfellow et al.] Chapter 8 Yi --
Oct 15 (R) Convolutional Networks I [Goodfellow et al.] Chapter 9 Yi --
Week 9 Oct 20 (T) Convolutional Networks II [Goodfellow et al.] Chapter 9 Yi --
Oct 21 (W) Recurrent Neural Networks [Goodfellow et al.] Chapter 10 Yi --
Oct 22 (R) Paper Presentation I (Topic 3) "Understanding neural networks through deep visualization" Sixiang & Zhichen (Group 5)
Paper Presentation II (Topic 3) "Understanding deep learning requires rethinking generalization" Xubo & Yuanyi (Group 6)
Week 10 Oct 27 (T) Paper Presentation I (Topic 3) "Least squares generative adversarial networks" Suha & Jacob (Group 7)
Paper Presentation II (Topic 3) "Explaining and harnessing adversarial examples" Mohammed & Jonathan (Group 8)
Oct 28 (W) Autoencoders -- Yi --
Oct 29 (R) Paper Presentation I (Topic 3) "Batch normalization" & "Revisit Batch Normalization" Chinmay & Kaustubh (Group 1)
Paper Presentation II (Topic 3) "Decision forests, convolutional networks and the models in-between" Kranthimithra & Piyush (Group 2)
Week 11 Nov 3 (T) Paper Presentation I (Topic 3) "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion" Berta & Ben (Group 3)
Paper Presentation II (Topic 3) "Building high-level features using large scale unsupervised learning" Akhila & Jennifer (Group 4)
Nov 4 (W) Visualization for Deep Learning -- Yi --
Nov 5 (R) Project Update -- All Submit a 5-minute video presentation (Due on Nov 3, at 11:59pm ET)
Week 12 Nov 10 (T) Paper Presentation I (Topic 3) "Importance weighted autoencoders" Zhixuan & Yikang (Group 5)
Paper Presentation II (Topic 3) "Learning to continually learn" Jayant & Xiangyu (Group 6)
Nov 11 (W) Geometric Deep Learning -- Yi --
Nov 12 (R) Paper Presentation I (Topic 3) "Fast, Faster, and Mask R-CNN" Ravi Jyani & Indrajeet (Group 7)
Paper Presentation II (Topic 3) "Semi-supervised knowledge transfer for deep learning from private training data" Sabri (Group 8)
Week 13 Nov 17 (T) Paper Presentation I (Topic 3) "Unsupervised representation learning with deep convolutional generative adversarial networks" Zhaiming & Jianwei (Group 1)
Paper Presentation II (Topic 3) "Domain Adversarial Neural Network" Saed (Group 2)
Topic 3 Summary Due on Sunday, at 11:59pm ET
Topic 4: Image and Shape Registration and Regression
Nov 18 (W) Image Registration [Zitova and Flusser 2003] Yi --
Nov 19 (R) Adjoint Methods -- Yi --
Week 14 Nov 24 (T) Paper Presentation I (Topic 3) "Structural-RNN: Deep Learning on Spatio-Temporal Graphs" Ravi Parashar & Kavit (Group 3)
Paper Presentation II (Topic 3) "Deep decision network for multi-class image classification" Aditya & Ehsan (Group 4)
Nov 25-27 (W-F) Thanksgiving Holiday
Week 15 Dec 1 (T) Paper Presentation I (Topic 3) "Insights and approaches using deep learning to classify wildlife" Shivam & Arshad (Group 5)
Paper Presentation II (Topic 3) "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images" Rahul (Group 7)
Dec 2 (W) Image and Shape Regression [Niethammer et al. MICCAI 2011]; [Fletcher IJCV 2013] Yi --
Dec 3 (R) Paper Presentation I (Topic 4) "Shape Matching and Object Recognition Using Shape Contexts" Sathwik & Nikitha (Group 6)
Paper Reading "Deep learning in medical image registration: a survey" Yi (Group 8)
Topic 4 Summary Due on Sunday, at 11:59pm ET
Week 16 Dec 9 (W) Project Presentation -- All Submit a 10-minute video presentation (Due on Dec 7, at 11:59pm ET)
Dec 15 (T) Project Write-Ups (8-page conference-formatted paper)

Study Group

Group 1: Sixiang Zhang, Zhichen Yan, Xubo Jiang, Yuanyi Zhang, Zhongliang

Group 2: Indrajeet Javeri, Ravi Jyani, Chinmay Kulkami, Kaustubh Rajput, Shivam Chandan

Group 3: Zhaiming Shen, Jianwei Hao, Zhixuan Chu, YiKang Gui, Xiangyu Zhang

Group 4: Padmanaban Hariharasubramanian, Akarsh Hebbar, Denna Saji, Himabindu Pyata, Piyush Subedi

Group 5: Suha Mokalla, Jacob Rose, Mohammed Aldosari, Jonathan Vance, Saed Rezayidemne

Group 6: Berta Franzluebbers, Ben Flanders, Akhila Devabhaktuni, Jennifer Mathis, Sabri Sabri

Group 7: Jayant Parashar, Kranthimithra Payapraju, Sathwik Kondapuram, Nikitha Jambula, Rahul Vijayvargiya

Group 8: Ravi Parashar, Kavit Mehta, Aditya Shinde, Ehsan Asali, Arshad Bagodiya

Reading List

Papers will be assigned on a first-come, first-served basis. You may also propose a paper that is not listed, but it must be approved by the instructor.

Sparse Representation, Dictionary Learning, and Low-Rank Approximation

Sparse coding & Dictionary learning

  1. Aharon et al., K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, TSP 2006.
  2. Elad and Aharon, Image denoising via sparse and redundant representation over learned dictionaries, TIP 2006.
  3. Mairal et al., Online dictionary learning for sparse coding, ICML 2009.
  4. Wright et al., Robust face recognition via sparse representation, TPAMI 2009.
  5. Yang et al., Image super-resolution via sparse representation, TIP 2010.
  6. Wang et al. Semi-coupled dictionary learning with application to image super-resolution and photo-sketch synthesis, CVPR 2012.
  7. Mairal et al. Task-driven dictionary learning, TPAMI 2012.
  8. Gangeh et al., Supervised dictionary learning and sparse representation - a review, arXiv:1502.05928 2015.
  9. Bao et al., Dictionary learning for sparse coding: Algorithms and convergence analysis, TPAMI 2016.
  10. Yang and Yang, Top-Down Visual Saliency via Joint CRF and Dictionary Learning, TPAMI 2016.
  11. Garcia-Cardona and Wohlberg, Convolutional Dictionary Learning: A Comparative Review and New Algorithms, TCI 2017.

Low rank approximation

  1. Wright et al., Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization, NIPS 2009.
  2. Cabral et al., Matrix completion for weakly-supervised multi-label image classification, TPAMI 2012.
  3. Zhang et al., Learning structured low-rank representation for image classification, CVPR 2013.
  4. Liu et al., Low-rank atlas image analyses in the presence of pathologies, TMI 2015.
  5. Ge et al., Matrix Completion has No Spurious Local Minimum, NIPS 2016.

Random Forests

Local optimization

  1. Fanello et al., Filter forests for learning data-dependent convolutional kernels, CVPR 2014.
  2. Schulter et al., Fast and accurate image upscaling with super-resolution forests, CVPR 2015.
  3. Dollar and Zitnick, Fast edge detection using structured forests, TPAMI 2015.

Global optimization

  1. Schulter et al., Alternating decision forest, CVPR 2013.
  2. Schulter et al., Alternating regression forests for object detection and pose estimation, ICCV 2013.
  3. Ren et al., Global refinement of random forest, CVPR 2015.

Hybrid architecture

  1. Kontschieder et al., Deep neural decision forests, ICCV 2015.
  2. Xie et al., Aggregated Residual Transformations for Deep Neural Networks, arXiv:1611.05431 2016.
  3. Ioannou et al., Decision forests, convolutional networks and the models in-between, arXiv:1603.01250 2016.
  4. Murthy et al., Deep Decision Network for Multi-Class Image Classification, CVPR 2016.

Understanding random forests

  1. Denil et al., Narrowing the gap: Random forests in theory and in practice, ICML 2014.
  2. Scornet et al., Consistency of random forests, The Annals of Statistics 2015.
  3. Scornet et al., On the asymptotics of random forests, Journal of Multivariate Analysis 2016.

Deep Learning

Overview

  1. LeCun et al., Deep learning, Nature 2015.
  2. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks, 2015.
  3. Lipton et al., A critical review of recurrent neural networks for sequence learning, arXiv:1506.00019 2015.

Convolutional neural networks

  1. Krizhevsky et al., ImageNet classification with deep convolutional neural networks, NIPS 2012.
  2. Szegedy et al., Going deeper with convolutions, CVPR 2015.
  3. Long et al., Fully convolutional networks for semantic segmentation, CVPR 2015.
  4. He et al., Deep residual learning for image recognition, CVPR 2016.
  5. Dai et al., Deformable Convolutional Networks, arXiv:1703.06211 2017.
  6. Huang et al., Densely Connected Convolutional Networks, CVPR 2017.

Recurrent neural network

  1. Gregor et al., DRAW: A recurrent neural network for image generation, arXiv:1502.04623 2015.
  2. Srivastava et al., Unsupervised learning of video representations using LSTMs, ICML 2015.
  3. Jain et al., Structural-RNN: Deep Learning on Spatio-Temporal Graphs, CVPR 2016.
  4. Chung et al., Hierarchical Multiscale Recurrent Neural Networks, arXiv:1609.01704 2016.

Autoencoder

  1. Vincent et al., Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, JMLR 2010.
  2. Le et al., Building high-level features using large scale unsupervised learning, ICML 2012.
  3. Kingma et al., Auto-encoding variational Bayes, ICLR 2014.
  4. Burda et al., Importance weighted autoencoders, ICLR 2016.

Deep structured models

  1. Schwing and Urtasun, Fully connected deep structured networks, arXiv:1503.02351 2015.
  2. Chen et al., Learning deep structured models, ICML 2015.
  3. Zheng et al., Conditional random fields as recurrent neural networks, ICCV 2015.

Generative Adversarial Networks (GANs)

  1. Goodfellow et al., Generative adversarial nets, NIPS 2014.
  2. Radford et al., Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv:1511.06434 2016.
  3. Oord et al., Pixel Recurrent Neural Networks, ICML 2016.
  4. Mao et al., Least squares generative adversarial networks, arXiv:1611.04076 2017.
  5. Shrivastava et al., Learning from Simulated and Unsupervised Images through Adversarial Training, CVPR 2017.

Training deep neural networks

  1. Srivastava et al., Dropout: A simple way to prevent neural networks from overfitting, JMLR 2014.
  2. Ioffe et al., Batch normalization: Accelerating deep network training by reducing internal covariate shift, arXiv:1502.03167 2015.
  3. Papernot et al., Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, ICLR 2017.
  4. Goyal et al., Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, arXiv:1706.02677 2017.

Object Detection & Annotation

  1. Girshick et al., Rich feature hierarchies for accurate object detection and semantic segmentation, CVPR 2014.
  2. Girshick R., Fast R-CNN, ICCV 2015.
  3. Ren et al., Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, TPAMI 2017.
  4. Redmon et al., You only look once: Unified, real-time object detection, CVPR 2016.
  5. Liu et al., SSD: Single Shot Multibox Detector, ECCV 2016.
  6. Redmon and Farhadi, YOLO9000: Better, Faster, Stronger, CVPR 2017.
  7. Castrejon et al., Annotating Object Instances with a Polygon-RNN, CVPR 2017.

Deep Learning in Medical Image Analysis

  1. Ronneberger et al., U-net: Convolutional Networks for Biomedical Image Segmentation, MICCAI 2015.
  2. Shen et al., Multi-scale Convolutional Neural Networks for Lung Nodule Classification, IPMI 2015.
  3. Milletari et al., V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation, arXiv:1606.04797 2016.
  4. Albarqouni et al., AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images, TMI 2016.
  5. Setio et al., Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks, TMI 2016.
  6. Dou et al., Automatic Detection of Cerebral Microbleeds From MR Images via 3D Convolutional Neural Networks, TMI 2016.
  7. Havaei et al., Brain tumor segmentation with Deep Neural Networks, Medical Image Analysis 2017.
  8. Litjens et al., A Survey on Deep Learning in Medical Image Analysis, arXiv:1702.05747 2017.
  9. Esteva et al., Dermatologist-level classification of skin cancer with deep neural networks, Nature 2017.

Understanding deep learning & Visualization

  1. Szegedy et al., Intriguing properties of neural networks, arXiv:1312.6199 2013.
  2. Goodfellow et al., Explaining and harnessing adversarial examples, arXiv:1412.6572 2014.
  3. Simonyan et al., Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, ICLR workshop 2014.
  4. Zeiler et al., Visualizing and understanding convolutional networks, ECCV 2014.
  5. Nguyen et al., Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, CVPR 2015.
  6. Yosinski et al., Understanding neural networks through deep visualization, ICML 2015.
  7. Zhang et al., Understanding deep learning requires rethinking generalization, ICLR 2017.

Image and Shape Regression

Image registration

  1. Zitova and Flusser, Image registration methods: a survey, Image and Vision Computing 2003.
  2. Beg et al., Computing large deformation metric mappings via geodesic flows of diffeomorphisms, IJCV 2005.
  3. Brown et al., Multi-image matching using multi-scale oriented patches, CVPR 2005.
  4. Hart et al., An optimal control approach for deformable registration, CVPR Workshop 2009.

Geodesic regression

  1. Niethammer et al., Geodesic regression for image time-series, MICCAI 2011.
  2. Fletcher, T., Geodesic regression and the theory of least squares on Riemannian manifolds, IJCV 2013.
  3. Singh et al., A vector momenta formulation of diffeomorphisms for improved geodesic regression and atlas construction, ISBI 2013.
  4. Fishbaugh et al., Geodesic shape regression in the framework of currents, IPMI 2013.

Higher-order models

  1. Hinkle et al., Intrinsic polynomials for regression on Riemannian manifolds, JMIV 2014.
  2. Singh et al., Splines for diffeomorphisms, MedIA 2015.
  3. Hong et al., Parametric regression on the Grassmannian, TPAMI 2016.

Longitudinal studies

  1. Singh et al., A hierarchical geodesic model for diffeomorphic longitudinal shape analysis, IPMI 2013.
  2. Durrleman et al., Toward a comprehensive framework for the spatiotemporal statistical analysis of longitudinal shape data, IJCV 2013.
  3. Schiratti et al., A mixed effect model with time reparametrization for longitudinal univariate manifold-valued data, IPMI 2015.

Kernel regression

  1. Davis et al., Population shape regression from random design data, ICCV 2007.
  2. Banerjee et al., A nonlinear regression technique for manifold valued data with applications to medical image analysis, CVPR 2016.

Reference Books and Resources

  • Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second Edition, Springer.

  • Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. In preparation for MIT Press. http://www.deeplearningbook.org.

  • Julien Mairal, Francis Bach, and Jean Ponce. Sparse Modeling for Image and Vision Processing. Now Publishers, 2014.

  • Antonio Criminisi, Jamie Shotton, and Ender Konukoglu. Decision Forests for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning. Microsoft Research technical report TR-2011-114.

  • Antonio Criminisi and Jamie Shotton. Decision Forests for Computer Vision and Medical Image Analysis. Springer 2013.

  • Madeleine Udell, Corinne Horn, Reza Zadeh, and Stephen Boyd. Generalized Low Rank Models. Foundations and Trends® in Machine Learning, 2016.

Useful Links

Dictionary Learning and Low-Rank Approximation

Random Forests

Deep Learning

Abbreviations

  • Journals – IJCV: International Journal of Computer Vision,   TPAMI: IEEE Transactions on Pattern Analysis and Machine Intelligence,   TSP: IEEE Transactions on Signal Processing,   JMIV: Journal of Mathematical Imaging and Vision,   TIP: IEEE Transactions on Image Processing,   TMI: IEEE Transactions on Medical Imaging,   MedIA: Medical Image Analysis,   JMLR: Journal of Machine Learning Research.

  • Conferences – MICCAI: Medical Image Computing and Computer Assisted Intervention,   IPMI: Information Processing in Medical Imaging,   ISBI: International Symposium on Biomedical Imaging,   ICML: International Conference on Machine Learning,   CVPR: IEEE Conference on Computer Vision and Pattern Recognition,   ECCV: European Conference on Computer Vision,   ICCV: International Conference on Computer Vision,   NIPS: Neural Information Processing Systems,   ICLR: International Conference on Learning Representations.

Academic Honesty

UGA Student Honor Code: “I will be academically honest in all of my academic work and will not tolerate academic dishonesty of others”. A Culture of Honesty, the University's policy and procedures for handling cases of suspected dishonesty, can be found at www.uga.edu/ovpi.

Mental Health and Wellness Resources

  • If you or someone you know needs assistance, you are encouraged to contact Student Care and Outreach in the Division of Student Affairs at 706-542-7774 or visit https://sco.uga.edu/. They will help you navigate any difficult circumstances you may be facing by connecting you with the appropriate resources or services.

  • UGA has several resources for a student seeking mental health services (https://www.uhs.uga.edu/bewelluga/bewelluga) or crisis support (https://www.uhs.uga.edu/info/emergencies).

  • If you need help managing stress, anxiety, relationships, etc., please visit BeWellUGA (https://www.uhs.uga.edu/bewelluga/bewelluga) for a list of FREE workshops, classes, mentoring, and health coaching led by licensed clinicians and health educators in the University Health Center.

  • Additional resources can be accessed through the UGA App.

Disclaimer

The course syllabus is a general plan for the course; deviations announced to the class by the instructor may be necessary.