<meta charset="utf-8">
<meta name="description" content="Peidong Liu's home page">
<link rel="stylesheet" href="file/index.css" type="text/css">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.3.1/css/all.css" integrity="sha384-mzrmE5qonljUremFsqc01SB46JvROS7bZs3IO2EmfFsd15uHvIt+Y8vEf7N7fWAU" crossorigin="anonymous">
<script async src="//busuanzi.ibruce.info/busuanzi/2.3/busuanzi.pure.mini.js"></script>
<title>Peidong Liu (DJI Automotive)</title>
<body>
<div id="layout-content" style="margin-top:25px">
<table>
<tbody>
<tr>
<td width="670">
<div id="toptitle">
<h1>Peidong Liu<img src="file/lpd_chinese.png" height="40px" style="margin-bottom:-10px;margin-left:20px"></h1>
<!-- <a href="file/CV_CN.pdf" target="_blank"><font size="2px">[中文简历]</font></a> -->
<a href="https://scholar.google.com/citations?user=pNBIQ8wAAAAJ&hl=en" target="_blank"><font size="2px">[Google Scholar]</font></a>
<!-- <a href="homepage_CN.html"><font size="2px">[中文版主页]</font></a> -->
<!-- <a href="https://github.com/PerdonLiu" target="_blank"><font size="2px">[Github]</font></a> -->
</div>
<h3>Advanced Algorithm Engineer at DJI Automotive</h3>
<p>
Shenzhen, Guangdong, China. <br>
<!-- Department of Computer Science and Technology<br> Tsinghua University <br> Beijing, China. 100091.<br> -->
<br> <b style="unicode-bidi:bidi-override; direction: rtl;">moc.liamg@uil.nodrep :liamE</b>
<br>
</p>
</td>
<td>
<img src="file/photo.jpeg" border="0" width="200">
</td>
</tr>
<tr>
</tr>
</tbody>
</table>
<h2>Biography</h2>
<p>
I am currently the lead for the visual-language-action (VLA) model at the DJI Automotive Perception Group, focusing in particular on fine-tuning the VLA to address the challenges posed by long-tailed scenarios. Before that, I was primarily responsible for Bird's-Eye-View (BEV) lane detection and large-scale multimodal retrieval systems. If you are interested in an internship opportunity, please feel free to drop me an email.
</p>
<p>
I obtained my M.S. in Computer Science from Tsinghua University in 2022 as an outstanding graduate. I have been fortunate to work closely with Prof. <a href="https://scholar.google.com/citations?user=voxznZAAAAAJ&hl=en&oi=ao" target=_blank>Xiaodan Liang</a> at Sun Yat-sen University, Dr. <a href="https://scholar.google.com/citations?user=J_8TX6sAAAAJ&hl=en&oi=ao" target=_blank>Hang Xu</a> at Huawei Noah's Ark Lab, and Dr. <a href="https://scholar.google.com/citations?user=PnNAAasAAAAJ&hl=en&oi=ao" target=_blank>Litong Feng</a> and Dr. <a href="https://scholar.google.com/citations?user=q4lnWaoAAAAJ&hl=en&oi=ao" target=_blank>Xinjiang Wang</a> at SenseTime Research. I received my B.S. in Software Engineering from Sun Yat-sen University summa cum laude in 2019. My research interests lie in computer vision and visual-language models.
</p>
<!-- After I obtain master's degree, I will join DJI Automotive as an algorithm engineer for perception, led by Dr. <a href="https://scholar.google.com.hk/citations?user=GArEeWQAAAAJ" target=_blank>Xiaozhi Chen</a> and Prof. <a href="https://scholar.google.com/citations?user=u8Q0_xsAAAAJ" target=_blank>Shaojie Shen</a>, and contribute to the field of autonomous driving. My supervisor is Prof. <a href="https://scholar.google.com/citations?hl=zh-CN&user=koAXTXgAAAAJ" target=_blank>Shu-Tao Xia</a>. -->
<!-- (led by Dr. <a href="https://scholar.google.com.hk/citations?user=GArEeWQAAAAJ" target=_blank>Xiaozhi Chen</a> and Prof. <a href="https://scholar.google.com/citations?user=u8Q0_xsAAAAJ" target=_blank>Shaojie Shen</a>) -->
<h2>News</h2>
<ul>
<li>(2025-04) Our VLA capabilities were publicly demonstrated at the 2025 Shanghai Auto Show (see this <a href="https://mp.weixin.qq.com/s/Y3RCZjBP4nPBRq3sAa3q1Q">WeChat article</a> for details).</li>
<li>(2024-01) I was awarded the 2023 Annual Efficiency Vanguard Award at DJI Automotive for outstanding contributions.</li>
<!-- <li><b style="color:red">I am actively looking for Ph.D. position worldwide starting Fall 2022.</b></li> -->
<li>(2022-07) Two of our works were accepted by ECCV2022.</li>
<li>(2022-06) I was named an Outstanding Graduate at Tsinghua University at both the university level (Top 1%) and the department level (Top 5%).</li>
<li>(2022-06) I received the Outstanding Master's Thesis Award at Tsinghua University (Top 5%).</li>
<li>(2021-10) I was awarded the National Scholarship for Postgraduates at Tsinghua University (Top 1%).</li>
<li>(2021-07) Our work was accepted by ACM MM2021 as an Oral paper.</li>
<li>(2021-02) I was invited to give a talk about our ICLR2021 paper at the QingYuan (青源) Seminar organized by the <a href="https://www.baai.ac.cn/">Beijing Academy of Artificial Intelligence (BAAI)</a>. Thanks to SenseTime for the invitation. Please see more details <a href="https://event.baai.ac.cn/activities/131">here</a>.</li>
<li>(2021-01) <a href="https://openreview.net/forum?id=5jzlpHvvRk">Our paper</a> was accepted by ICLR2021; it is the first Autoloss work for object detection. The code is released <a href="https://github.com/PerdonLiu/CSE-Autoloss">here</a>.</li>
</ul>
<h2>Publications</h2>
* denotes equal contribution.
<table style="width:100%">
<!-- <tr>
<td rowspan="3" width='30%'><img src='file/CLIP4Drive.png' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>CLIP4Drive: Pioneering Tail Data Retrieval for Autonomous Driving</h4></td>
</tr> -->
<tr>
<td>
<!-- <p><b style="color:black">Peidong Liu</b>, Jinmin Li, Jie Mei, Bin Xu, Xiaozhi Chen</p> -->
<p>In Submission</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/SimCC.png' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>SimCC: a Simple Coordinate Classification Perspective for Human Pose Estimation [<a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136660088.pdf">PDF</a>]</h4></td>
</tr>
<tr>
<td>
<p>Yanjie Li, Sen Yang, <b style="color:black">Peidong Liu</b>, Shu-Tao Xia</p>
<p>European Conference on Computer Vision (ECCV), 2022</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/NEXT_arc.png' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>NeXT: Towards High Quality Neural Radiance Fields via Multi-Skip Transformer [<a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136920069.pdf">PDF</a>]</h4></td>
</tr>
<tr>
<td>
<p>Yunxiao Wang, Yanjie Li, <b style="color:black">Peidong Liu</b>, Tao Dai, Shu-Tao Xia</p>
<p>European Conference on Computer Vision (ECCV), 2022</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/MTRUB.jpg' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>Multi-task Ranking with User Behaviors for Text-Video Search [<a href="https://dl.acm.org/doi/10.1145/3487553.3524207">PDF</a>]</h4></td>
</tr>
<tr>
<td>
<p><b style="color:black">Peidong Liu</b>, Dongliang Liao, Jinpeng Wang, Yangxin Wu, Gongfu Li, Shu-Tao Xia, Jin Xu</p>
<p>International World Wide Web Conferences (WWW, CCF-A) Companion, 2022</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/CSE-Autoloss.jpg' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search [<a href="https://openreview.net/forum?id=5jzlpHvvRk">PDF</a>] [<a href="https://event.baai.ac.cn/activities/131">Talk</a>] [<a href="file/cse_poster.pdf">Poster</a>] [<a href="file/cse-ppt.pdf">PPT</a>] [<a href="https://github.com/PerdonLiu/CSE-Autoloss">Code</a>] </h4></td>
</tr>
<tr>
<td>
<p><b style="color:black">Peidong Liu*</b>, Gengwei Zhang*, Bochao Wang, Hang Xu, Xiaodan Liang, Yong Jiang, Zhenguo Li</p>
<p>International Conference on Learning Representations (ICLR), 2021</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/MFD.jpg' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations [<a href="https://arxiv.org/pdf/2107.03088.pdf">PDF</a>]</h4></td>
</tr>
<tr>
<td>
<p><b style="color:black">Peidong Liu*</b>, Zibin He*, Xiyu Yan*, Yong Jiang, Shu-Tao Xia, Feng Zheng, Maowei Hu</p>
<p>ACM International Conference on Multimedia (ACM MM, CCF-A) <b style="color:black">Oral</b>, 2021</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/privacy.jpg' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>Visual Privacy Protection via Mapping Distortion [<a href="https://arxiv.org/pdf/1911.01769v2.pdf">PDF</a>] [<a href="https://github.com/PerdonLiu/Visual-Privacy-Protection-via-Mapping-Distortion">Code</a>]</h4></td>
</tr>
<tr>
<td>
<p>Yiming Li*, <b style="color:black">Peidong Liu*</b>, Yong Jiang, Shu-Tao Xia</p>
<p>International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/DFCNet.jpg' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>Deep Flow Collaborative Network for Online Visual Tracking [<a href="./file/icassp2020.pdf">PDF</a>]</h4></td>
</tr>
<tr>
<td>
<p><b style="color:black">Peidong Liu</b>, Xiyu Yan, Yong Jiang, Shu-Tao Xia</p>
<p>International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020</p>
</td>
</tr>
</table>
<table style="width:100%">
<tr>
<td rowspan="3" width='30%'><img src='file/PW-LDA.png' width='100%'></img></td>
<td rowspan="3" width='2%'></td>
<td><h4>LDA Meets Word2Vec: A Novel Model for Academic Abstract Clustering [<a href="./file/www2018.pdf">PDF</a>]</h4></td>
</tr>
<tr>
<td>
<p>Changzhou Li, Yao Lu, Junfeng Wu, Yongrui Zhang, Zhongzhou Xia, Tianchen Wang, Dantian Yu, Xurui Chen, <b style="color:black">Peidong Liu</b>, Junyu Guo</p>
<p>International World Wide Web Conferences (WWW, CCF-A) Companion, 2018</p>
</td>
</tr>
</table>
<!-- <p>
1. <b>P. Liu.</b>, Xiyu Yan, Yong Jiang, Shu-Tao Xia. Deep Flow Collaborative Network for Online Visual Tracking. ICASSP2020.<a href="./file/icassp2020.pdf">[pdf]</a>
</p>
<p>
2. C. Li, Y. Lu, J. Wu, Y. Zhang, Z. Xia, T. Wang, D. Yu, X. Chen, <b>P. Liu</b>, and J. Guo. LDA Meets Word2Vec: A Novel Model for Academic Abstract Clustering. In The 2018 Web Conference Companion. WWW2018.<a href="./file/www2018.pdf">[pdf]</a>
</p>
<p>
3. <b>P. Liu*</b>, Yiming Li*, Yong Jiang, Shu-Tao Xia. Visual Privacy Protection via Mapping Distortion. arXiv preprint arXiv: 1911.01769.<a href="./file/privacy2020.pdf">[pdf]</a>
</p> -->
<h2>Selected Awards</h2>
<ul>
<li>2024.01 DJI Automotive 2023 Annual Efficiency Vanguard Award</li>
<li>2022.06 University-wise (Top 1%) and Department-wise (Top 5%) Outstanding Graduate at Tsinghua University</li>
<li>2022.06 Outstanding Master's Thesis Award at Tsinghua University (Top 5%)</li>
<li>2021.10 National Scholarship for Postgraduate (Top 1%)</li>
<li>2019.06 Outstanding Graduate of Sun Yat-sen University (Top 3%)</li>
<li>2018.10 Second Class Academic Scholarship of Sun Yat-sen University (Top 8%)</li>
<li>2017.10 Bronze Award in Intel Cup – Parallel Application Challenge (PAC) 2017, China (Top 6%)</li>
<li>2017.10 First Class Academic Scholarship of Sun Yat-sen University (Top 3%)</li>
<li>2017.01 Honorable Mention in the Interdisciplinary Contest in Modeling</li>
<li>2016.10 First Class Academic Scholarship of Sun Yat-sen University (Top 3%)</li>
</ul>
<h2>Research Experience in Academia and Industry</h2>
<table id="tbTeaching" width="100%">
<tbody>
<tr>
<td valign="top" width="16%">2022.07 - till now </td>
<td><b class="Institute">Perception Group, DJI Automotive</b><br>
<b class="Status">Advanced Computer Vision Algorithm Engineer</b><br>
<!-- <li class="Works" margin-left="10000px">The VLA focuses on vertically integrating multimodal large language models into the intelligent driving domain, realizing two types of capabilities: understanding and decision-making for long-tail scenarios and voice control for ambiguous commands. </li> -->
<li class="Works" margin-left="10000px">Responsible for applying Visual-Language-Action (VLA) models in autonomous driving from scratch, which encompasses an extensive survey of open-source data and models, the establishment of data annotation and model fine-tuning pipeline. This includes the design of prompts, LoRA fine-tuning, and the utilization of Deepspeed for multi-node training. By leveraging both open-source and proprietary datasets, the model has been fine-tuned to possess capabilities in perception, decision-making, and planning. The VLA realizes two types of capabilities: understanding and decision-making for long-tail scenarios and voice control for ambiguous commands</li>
<li class="Works" margin-left="10000px">Developed the multimodal large model in image-text retrieval, focusing on mining long-tail data of user interest within massive video databases, with the aim of empowering autonomous driving applications. Specifically, I leverage LLM (Large Language Model) and Diffusion Model to generate synthetic data, augmenting the existing dataset and enhancing the model's performance</li>
<li class="Works" margin-left="10000px">Developed multiple Bird-Eye-View (BEV) lane detection solutions, including temporal BEV, fisheye BEV and road topology, aiming to facilitate the implementation of various solutions in mass production projects <p><b>(I achieved great performance as a result of my accomplishments)</b> </li>
<li class="Works" margin-left="10000px">Optimized Closed-loop lane detection data recycle process, which involves the entire cycle of data collection, feedback, filtering, reconstruction, annotation, etc. This comprehensive approach has greatly improved data recycle efficiency by over 100% through streamlining coordination and collaboration among several modules <b>(I was awarded the 2023 Annual Efficiency Vanguard Award at DJI Automotive due to my outstanding contributions)</b></p> </li>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td valign="top" width="16%">2019.09 - 2022.06 </td>
<td><b class="Institute">Department of Computer Science and Technology, Tsinghua University</b><br>
<b class="Status">Master Student</b><br>
<b class="Mentor">Supervisor: Shu-Tao Xia</b><br>
<li class="Works" margin-left="10000px">Proposed Memory Flow Distillation, called MFD, for video semantic segmentation. MFD utilizes weakly-supervised training pattern, optical flow and distillation to alleviate two issues: fine-annotation scarcity and low inference speed. For PSPNet MobileNetV2, MFD increases the performance by 10.24% mIoU and reaches a real-time speed (ACM MM2021 Oral)</li>
<li class="Works" margin-left="10000px">Proposed a Flow Collaborative Network, called DFCNet, for online visual tracking. DFCNet only runs the complex feature network on sparse keyframes, which is selected by raised adaptive keyframe scheduling. DFCNet maximizes the benefits of both feature appearance and temporal information and reaches 30% faster than baseline without compromising accuracy (ICASSP2020)</li>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td valign="top" >2021.06 - 2022.05 </td>
<td><b class="Institute">Search Application Department, WeChat Group, Tencent</b><br>
<b class="Status">Computer Vision Algorithm Engineer Intern</b><br>
<li class="Works" margin-left="10000px">To address the challenges of low click-through rates and completion rates in video retrieval for WeChat Channel, as well as the issue of imprecise query-item matching, we defined a new problem: multi-target ranking for video retrieval. By extracting 800k query-document pairs from user interaction logs and employing a multimodal fusion model combined with the MMOE framework as the baseline, the research modeled multiple objectives and achieved a 3% improvement in the average AUC-ROC for each objective</li>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td valign="top" >2020.04 - 2021.03 </td>
<td><b class="Institute">Noah's Ark Lab, Huawei</b><br>
<b class="Status">Research Intern</b><br>
<b class="Mentor">Mentor: Xiaodan Liang, Hang Xu, Bochao Wang</b><br>
<li class="Works" margin-left="10000px">Proposed an effective convergence-simulation driven evolutionary search algorithm, called CSE-Autoloss, for object detection loss function discovery, which achieves 20x speedup via progressive convergence-simulation modules (ICLR2021)</li>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td valign="top" width="16%">2019.07 - 2019.09 </td>
<td><b class="Institute">Y-Tech AI Lab, Beijing Kuaishou Technology Ltd.</b><br>
<b class="Status">AI Intern</b><br>
<li class="Works" margin-left="10000px">Improved face parsing task with landmarks by around 2% in accuracy on baseline model UNet
</li>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td valign="top" width="16%">2018.11 - 2019.06 </td>
<td><b class="Institute">Fundamental Technique Research Group, SenseTime Research</b><br>
<b class="Status">Research Intern</b><br>
<b class="Mentor">Mentor: Litong Feng (Senior Researcher, Ph.D.)</b><br>
<li class="Works" margin-left="10000px">Solely responsible for building the entire pipeline for converting pytorch models to caffe models, including models for classification (Resnet, Inception Resnet series) and Object Detection (SSD, Faster Rcnn), etc.
</li>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<!-- <tr>
<td valign="top" width="16%">2018.06 - 2018.08 </td>
<td><b class="Institute">Institute of Information System and Engineering, School of Software, THU</b><br>
<b class="Status">Research Assistant</b><br>
<b class="Mentor">Mentor: Chunping Li (Associate Professor, School of Software, THU)</b><br>
<li class="Works" margin-left="10000px">Implemented the code of several common topic models in NLP area like LDA, GSDMM, HLDA, CTM</li>
<li class="Works">Analyzed Weibo data through topic models above and performed experiments</li></td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr> -->
<tr>
<td width="15%" valign="top">2018.03 - 2018.05 </td>
<td width="85%"><b class="Institute">NUS-Tsinghua Center for Extreme Search(NExT++), NUS, Singapore</b><br>
<b class="Status">Research Assistant</b><br>
<b class="Mentor">Mentor: Zhaoyan Ming (Ph.D., Team Head, NExT++)</b><br>
<li class="Works">Implemented an algorithm to classify Southeast Asian food with complex names and meanings</li>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td valign="top" width="16%">2017.10 - 2018.02 </td>
<td><b class="Institute">Smart Mobile Computing Lab, Advanced Networking and Computing Systems Institute, SYSU</b><br>
<b class="Status">Research Assistant</b><br>
<b class="Mentor">Mentor: Xu Chen (Professor, School of Data and Computer Science, SYSU)</b><br>
<li class="Works" margin-left="10000px">Engaged in modeling 30GB articles of WeChat Moment with effective structural features</li>
<li class="Works">Applied Logistic Regression, Random Forest and GBDT to predict the information growth</li></td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td valign="top" width="16%">2017.07 - 2017.10 </td>
<td><b class="Institute">Natural Language Processing Group, Guangdong Province Key Laboratory of Computational Science, SYSU</b><br>
<b class="Status">Research Assistant</b><br>
<b class="Mentor">Mentor: Yao Lu (Professor, School of Data and Computer Science, SYSU)</b><br>
<li class="Works">Participated in text analysis and text mining of medical scientific literature, including preprocessing, word vector representation with Word2Vec, vector dimension reduction with PCA, keywords obtained via TF-IDF, topic number analysis via AP algorithm and article topics obtained via LDA</li>
<li class="Works">Realized parallelization with Spark</li>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<!-- <tr>
<td valign="top" width="16%">2017.03 - 2017.06 </td>
<td><b class="Institute">Research Group of School of Government, SYSU</b><br>
<b class="Status">Team Leader</b><br>
<b class="Mentor">Mentor: Zijie Shao (Associate Researcher, School of Government), Yueping Zheng</b><br>
<li class="Works">Designed web crawler program with Python for data collection of Weibo users’ reposts and comments</li>
<li class="Works">Analyzed Weibo information spread modes through text analysis and data mining</li>
<li class="Works">Built server with Tomcat to store videos for online surveys</li></td>
</tr> -->
</tbody>
</table>
<!-- <h2>Computer Skills</h2>
<ul>
<li>Deep Learning Framework: Pytorch; Caffe</li>
<li>Advanced Programming Languages: C++; Python; C; Java; R</li>
<li>Distributed Framework: Spark</li>
<li>Database: SQL; MongoDB</li>
<li>Web Development Languages: Html; Css; Javascript; Nodejs; Angularjs</li>
<li>Programming Software: Matlab</li>
<li>Others: Onnx; Docker; Linux</li>
</ul> -->
<h2>Academic Service</h2>
<p>
Conference Reviewer for AAAI 2022 and WWW 2022.
</p>
<h2>Education</h2>
<p>
2019.09 - 2022.06, master's student in the <a href="http://www.tsinghua.edu.cn/publish/cs/" target=_blank>Department of Computer Science and Technology</a> at <a href="http://www.tsinghua.edu.cn/publish/thu2018/index.html" target=_blank>Tsinghua University</a>
</p>
<p>
2015.09 - 2019.06, undergraduate student in the <a href="https://cse.sysu.edu.cn/" target=_blank>School of Computer Science and Engineering</a> at <a href="http://www.sysu.edu.cn/2012/cn/index.htm" target=_blank>Sun Yat-sen University</a>, ranked 3/119
</p>
<p>
2018.01 - 2018.05, exchange student in the <a href="http://www.comp.nus.edu.sg/" target=_blank>School of Computing</a> at <a href="http://www.nus.edu.sg/" target=_blank>National University of Singapore</a>, research intern in the <a href="http://www.nextcenter.org/" target=_blank>NExT++ lab</a>
</p>
<div id="footer">
<div id="footer-text"></div>
</div>
</div>
Last updated in Jun 2025. There have been <span id="busuanzi_container_site_uv"><span id="busuanzi_value_site_uv"></span> visitors</span> and <span id="busuanzi_container_site_pv"><span id="busuanzi_value_site_pv"></span> page views</span> so far.
</body>
</html>