<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Trustworthy Artificial Intelligence / Machine Learning Course: Professor Birhanu Eshete</title>
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
</head>
<body>
<!-- Main Content -->
<div class="container mt-4">
<table class="table table-bordered">
<thead style="background-color:#00274C;color:#FFCB05" align="center" border="0">
<tr id="section1">
<th align="center">
<h1>CIS 482/582: Trustworthy Artificial Intelligence</h1>
<h2>University of Michigan, Dearborn</h2>
</th>
</tr>
</thead>
</table>
<div class="alert alert-primary" role="alert" style="background-color:#FFCB05;color:#00274C">
<b>Course Information:</b>
<ul>
<li><b>Current Offering Term</b>: Winter 2024, University of Michigan, Dearborn</li>
<li><b>Instructor</b>: <a href ="https://birhanu-eshete.github.io" target="_blank">Prof. Birhanu Eshete</a>: <a href="mailto:birhanu@umich.edu">birhanu@umich.edu</a>; Office: CIS 229</li>
<li><b>Teaching Assistant(s)</b>: Abe Amich: <a href="mailto:aamich@umich.edu">aamich@umich.edu</a></li>
<li><b>Time</b>: Mondays 6pm - 8:45pm</li>
<li><b>Venue</b>: HPEC 1181</li>
<li><b>Office hours</b>: Monday: 3pm - 4:30pm or by (virtual/in-person) appointment</li>
<li><b>Canvas</b>: If you are a UMICH-Dearborn student enrolled in this course, <a href="https://canvas.umd.umich.edu/courses/537448" target="_blank">access specifics here with umich.edu credentials</a></li>
</ul>
</div>
<div class="alert alert-primary" role="alert" style="background-color:#FFCB05;color:#00274C">
<p><b>Course Description</b>:
This course introduces students to the broad and emerging notion of trustworthy artificial intelligence (AI). Beginning with a hands-on introduction to the basics of Deep Neural Networks (DNNs) and modeling, it covers three broad areas of trustworthiness in AI. In the first area, robustness, the course introduces students to the AI threat landscape, focusing on training data poisoning, model evasion, privacy-sensitive data inference, model stealing/extraction, and threats to the safe deployment of AI. In the second area, transparency, students are introduced to frameworks used to interpret/explain AI models’ decisions. In the third area, accountability, students learn methods and tools for reducing bias and ethical pitfalls when AI models are deployed in high-stakes application domains. The course concludes with a broader take on AI trustworthiness by studying the dynamics among these three trustworthiness desiderata. The course is taught in a predominantly project-based setting to allow students to gain hands-on experience beyond conceptual understanding.</p>
</div>
<div class="alert alert-primary" role="alert" style="background-color:#FFCB05;color:#00274C">
<p><b>On Prerequisites</b>: Prior knowledge of machine learning is not required, though it will be a plus. To level the ground for everyone, the course will kick off with an ML crash course covering just enough to understand the subsequent material. Students are expected to be proficient in at least one programming language (e.g., Python, C/C++, Java). Knowledge of data structures such as trees and graphs is also helpful.</p>
</div>
<div class="alert alert-primary" role="alert" style="background-color:#FFCB05;color:#00274C">
<p> <b>Reference Materials</b>: This course doesn’t have a dedicated textbook. However, we will use the following three books as our main references. In addition to these books, the course will heavily rely on influential papers for each topic discussed.</p>
<ol>
<li>Trustworthy Machine Learning by Kush R. Varshney, Independently Published, 2022: <a href= "http://www.trustworthymachinelearning.com/trustworthymachinelearning.pdf">here</a></li>
<li>Adversarial Machine Learning by Joseph, Nelson, Rubinstein, and Tygar: <a href="https://www.cambridge.org/core/books/adversarial-machine-learning/C42A9D49CBC626DF7B8E54E72974AA3B">here</a></li>
<li>Fairness in Machine Learning: Limitations and Opportunities by Solon Barocas, Moritz Hardt, Arvind Narayanan: <a href="https://fairmlbook.org/">here</a></li>
</ol>
</div>
<div class="alert alert-danger" role="alert" style="background-color:#FFCB05;color:#00274C">
<p>
<b>On Scope</b>: While this course is about AI/ML, it does not cover the formalisms or technical details of ML or Deep Neural Networks; just enough deep learning fundamentals to grasp subsequent topics are introduced at the beginning of the course. The course is intentionally broad so as to reason about ML trustworthiness beyond ML in the presence of adversaries: it expands the focus from ML security and privacy to the safety, transparency, fairness, and ethical implications of AI/ML deployed in high-stakes application domains. Given this focus on breadth rather than depth, the emphasis is on representative trustworthiness risks/pitfalls, remedies/best practices, and the dynamics thereof. The AI/ML trustworthiness field is a work in progress with respect to techniques, tools, and regulatory provisions; in light of this ongoing evolution, I plan to update the material to keep up with the collective progress made by academia, industry, government, and public interest technology/policy initiatives.
</p>
</div>
<!-- Table -->
<h4>Schedule and Materials</h4>
<p style="color:#00274C">The schedule below is tentative; it will be updated as the semester advances.</p>
<table class="table table-bordered">
<thead style="background-color:#00274C;color:#FFCB05">
<tr id="section1">
<th>Week</th>
<th>Topic</th>
<th>Slides/Demos</th>
<th>Resources/Suggested Reading</th>
</tr>
</thead>
<tbody>
<!-- Row 1 -->
<tr>
<td>1</td>
<td>Motivation and Intro</td>
<td>
<a href= "https://drive.google.com/file/d/17I6U_VqB1QxDaRzBeM_kbSkcqwiatQhs/view?usp=sharing" target="_blank">Slides</a><br>
<a href= "https://www.youtube.com/watch?v=VUGG2wpTDSA" target="_blank">Video</a>
</td>
<td>
<ol>
<li><a href="https://github.com/trustworthy-ml-course/trustworthy-ml-course.github.io/blob/main/pdfs/making-machine-learning-trustworthy-birhanu-eshete-science.abi5052.pdf" target="_blank">Birhanu Eshete, Making Machine Learning Trustworthy</a></li>
<li><a href="https://krvarshney.github.io/pubs/Varshney_xrds2019.pdf" target = "_blank">Kush R. Varshney, Trustworthy Machine Learning and Artificial Intelligence</a></li>
</ol>
</td> </tr>
<!-- Row 2 -->
<tr>
<td>2</td>
<td>A Crash Course on Deep Neural Networks</td>
<td>
<a href= "https://drive.google.com/file/d/1r5FaJDr1Vi7m5chzpl_QlKkNdR23pGzS/view?usp=sharing" target="_blank">Slides</a><br>
<a href= "https://www.youtube.com/watch?v=f1uPWbh6DvU" target="_blank">Video</a><br>
<a href= "https://github.com/trustworthy-ml-course/trustworthy-ml-course.github.io/blob/main/demos/Demo-1.ipynb" target="_blank">Demo</a>
</td>
<td>
<ol>
<li><a href="https://fleuret.org/public/lbdl.pdf" target="_blank">François Fleuret. The Little Book of Deep Learning</a></li>
</ol>
</td>
</tr>
<!-- Row 3.5 -->
<tr>
<td></td>
<td>Machine Learning Attack Surface</td>
<td>No separate lecture for this: it is covered within adversarial examples, training data poisoning, membership inference, and model stealing</td>
<td>
<ol>
<li><a href="https://oaklandsok.github.io/papers/papernot2018.pdf" target="_blank"> Papernot et al., SoK: Security and Privacy in Machine Learning</a></li>
<li><a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf" target="_blank">Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations</a></li>
</ol>
</td> </tr>
<!-- Row 4 -->
<tr>
<td>3</td>
<td>Adversarial Examples</td>
<td>
<a href= "https://drive.google.com/file/d/11xFyl0ZfjRzN9aAsy2mnvjRrPcAHnSxn/view?usp=sharing">Slides</a> <br>
<a href= "https://youtu.be/4hf362X7SJ0">Video</a> <br>
<a href= "https://github.com/trustworthy-ml-course/trustworthy-ml-course.github.io/blob/main/demos/Demo-2.ipynb" target="_blank">Demo</a>
</td>
<td>
<ol>
<li><a href="https://arxiv.org/pdf/1312.6199.pdf" target="_blank">Szegedy et al., Intriguing properties of neural networks</a></li>
<li><a href="https://arxiv.org/pdf/1602.02697.pdf" target="_blank">Papernot et al., Practical Black-Box Attacks against Machine Learning</a></li>
<li><a href="https://arxiv.org/pdf/1707.08945.pdf" target="_blank">Eykholt et al., Robust Physical-World Attacks on Deep Learning Visual Classification</a></li>
<li><a href="https://arxiv.org/pdf/1412.6572.pdf" target="_blank">Goodfellow et al., Explaining and Harnessing Adversarial Examples</a></li>
<li><a href="https://arxiv.org/pdf/2108.13952.pdf" target="_blank">Amich and Eshete, Morphence: Moving Target Defense Against Adversarial Examples</a></li>
</ol>
</td> </tr>
<!-- Row 5 -->
<tr>
<td>4</td>
<td>Training Data Poisoning</td>
<td><a href= "https://drive.google.com/file/d/1fVEKja_l5okzmBWv9xUSQuj80akN3i8Y/view?usp=sharing">Slides</a><br>
<a href= "https://youtu.be/65AtNoB3AWE">Video</a><br>
<a href= "https://github.com/trustworthy-ml-course/trustworthy-ml-course.github.io/blob/main/demos/Demo-3.ipynb" target="_blank">Demo</a>
</td>
<td>
<ol>
<li><a href="https://arxiv.org/pdf/1708.06733.pdf" target="_blank">Gu et al., BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain</a></li>
<li><a href="https://arxiv.org/pdf/1712.05526.pdf" target="_blank">Chen et al., Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning</a></li>
<li><a href="https://arxiv.org/abs/2110.06904" target="_blank">Shan et al., Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks</a></li>
</ol>
</td> </tr>
<!-- Row 6 -->
<tr>
<td>5</td>
<td>Membership Inference</td>
<td>
<a href= "https://drive.google.com/file/d/1c755aTE-BxQof1PKQYdHpZmFOyAkP7f3/view?usp=sharing">Slides</a><br>
<a href= "https://youtu.be/fyNf4NiJNgUg">Video</a><br>
<a href= "https://github.com/trustworthy-ml-course/trustworthy-ml-course.github.io/blob/main/demos/Demo-4.ipynb" target="_blank">Demo</a>
</td>
<td>
<ol>
<li><a href="https://arxiv.org/pdf/1610.05820.pdf" target="_blank">Shokri et al., Membership Inference Attacks against Machine Learning Models</a></li>
<li><a href="https://arxiv.org/pdf/1610.05755.pdf" target="_blank">Papernot et al., Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data</a></li>
<li><a href="https://arxiv.org/pdf/1607.00133.pdf" target="_blank">Abadi et al., Deep Learning with Differential Privacy</a></li>
<li><a href="https://petsymposium.org/popets/2023/popets-2023-0024.pdf" target="_blank">Jarin and Eshete, MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members</a></li>
</ol>
</td> </tr>
<!-- Row 7 -->
<tr>
<td>6</td>
<td>Model Extraction</td>
<td>
<a href= "https://drive.google.com/file/d/1HuvvQ7qvIFTZTMDv0KjgxSjD7LYaK3KH/view?usp=sharing">Slides</a><br>
<a href= "https://youtu.be/V6kjVPLDno4">Video</a>
</td>
<td>
<ol>
<li><a href="https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf" target="_blank">Tramer et al., Stealing Machine Learning Models via Prediction APIs</a></li>
<li><a href="https://arxiv.org/pdf/2006.15725.pdf" target="_blank">Ali and Eshete, Best-Effort Adversarial Approximation of Black-Box Malware Classifier</a></li>
<li><a href="https://arxiv.org/pdf/2002.12200.pdf" target="_blank">Jia et al., Entangled Watermarks as a Defense against Model Extraction</a></li>
</ol>
</td> </tr>
<!-- Row 8 -->
<tr>
<td>7</td>
<td>Transparency and Interpretability</td>
<td><a href= "https://drive.google.com/file/d/1N2uHUbR6fMcIlVev8BlYgcSYgpTX8rlj/view?usp=sharing">Slides</a><br>
<a href="https://youtu.be/FbxUWJbwqtQ">Video</a><br>
<a href= "https://github.com/trustworthy-ml-course/trustworthy-ml-course.github.io/blob/main/demos/Demo-5.ipynb" target="_blank">Demo</a>
</td>
<td>
<ol>
<li><a href="https://arxiv.org/pdf/1811.10154.pdf" target="_blank">Cynthia Rudin, Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead</a></li>
<li><a href="https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf" target="_blank">Ribeiro et al., “Why Should I Trust You?” Explaining the Predictions of Any Classifier</a></li>
<li><a href="https://arxiv.org/pdf/1705.07874.pdf" target="_blank">Scott Lundberg, Su-In Lee. A Unified Approach to Interpreting Model Predictions</a></li>
</ol>
</td> </tr>
<!-- Row 9 -->
<tr>
<td>8</td>
<td>Fairness</td>
<td><a href= "https://drive.google.com/file/d/1doqTz6sBhHSS2xEThuuLtBtn57Zr4vXR/view?usp=sharing">Slides</a><br>
<a href="https://youtu.be/VMBE2mgpxH8">Video</a><br>
<a href= "https://github.com/trustworthy-ml-course/trustworthy-ml-course.github.io/blob/main/demos/Demo-6.ipynb" target="_blank">Demo</a>
</td>
<td>
<ol>
<li><a href="https://arxiv.org/pdf/1104.3913.pdf" target="_blank">Dwork et al., Fairness Through Awareness</a></li>
<li><a href="https://www.cs.toronto.edu/~toni/Papers/icml-final.pdf" target="_blank">Zemel et al., Learning Fair Representations</a></li>
<li><a href="https://arxiv.org/pdf/1610.02413.pdf" target="_blank">Hardt et al., Equality of Opportunity in Supervised Learning</a></li>
<li><a href="https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf" target="_blank">Buolamwini and Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification</a></li>
</ol>
</td> </tr>
<!-- Row 10 -->
<tr>
<td>9</td>
<td>Ethics and Governance</td>
<td>
<a href= "https://drive.google.com/file/d/1z7Jevj8BoheNHzFzzydF-DXTeO4exfA3/view?usp=sharing">Slides</a><br>
<a href= "https://youtu.be/G4HDz7l2sZo">Video</a>
</td>
<td>
<ol>
<li><a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf" target="_blank">NIST: AI Risk Management Framework (AI RMF 1.0)</a></li>
<li><a href="https://dl.acm.org/doi/pdf/10.1145/3531146.3533088" target="_blank">Weidinger et al., Taxonomy of Risks Posed by Language Models</a></li>
</ol>
</td> </tr>
<!-- Row 11 -->
<tr>
<td>10</td>
<td>Holistic Trustworthiness Considerations and Open Issues </td>
<td><a href= "https://drive.google.com/file/d/1LMErHOz97vsfz4JSQKPIEzLkDwiskTOG/view?usp=sharing">Slides</a></td>
<td>
</td> </tr>
</tbody>
</table>
<div>
<b>Similar Courses</b>:
Below are similar courses on the topic of trustworthy AI/ML. Depth and breadth of topics may vary depending on the instructor and institution.
<ul>
<li><a href = "https://secure-ai.systems/courses/MLSec/Sp23/syllabus.html" target="_blank">CS 499/579: Trustworthy Machine Learning</a></li>
<li><a href = "https://web.stanford.edu/class/cs329t/" target="_blank">CS 329T: Trustworthy Machine Learning</a></li>
<li><a href = "https://www.papernot.fr/teaching/f22-trustworthy-ml.html" target="_blank">ECE1784H/CSC2559H: Trustworthy Machine Learning</a></li>
<li><a href = "https://cseweb.ucsd.edu/classes/sp20/cse291-b/index.html" target="_blank">CSE 291 Section B: Topics in Trustworthy Machine Learning</a></li>
</ul>
</div>
<hr>
<p>© Birhanu Eshete 2024 </p>
</div>
<!-- Bootstrap JS and Popper.js -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
</body>
</html>