I am recruiting students who are interested in the AI-in-Health domain. Please check out this recently funded project. The project, a collaboration with the UC Davis Medical Center, involves analyzing new datasets (consisting of cardiac output, respiratory rate, and mechanical ventilation waveforms) and developing predictive models. For instance, different types of ventilation change the blood pressure and aortic flow waveforms during fluid boluses, and these waveforms can be used to predict responsiveness to larger fluid boluses. Students who are interested in this opportunity will be trained in health informatics and relevant physiological models. If you are interested in the program, please send an email to firstname.lastname@example.org with the subject line [EPACC], along with your resume or CV.
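To give a flavor of the kind of waveform-derived predictor involved, pulse pressure variation (PPV) over a respiratory cycle is a classic bedside index of fluid responsiveness. The sketch below is illustrative only: the beat values and the ~13% cutoff are textbook examples, not data or methods from this project.

```python
# Illustrative sketch: pulse pressure variation (PPV), a classic
# waveform-derived index of fluid responsiveness. The beat values and
# the ~13% threshold are textbook examples, not project data.

def pulse_pressure_variation(beats):
    """beats: list of (systolic, diastolic) pressures, one per heartbeat,
    spanning at least one respiratory cycle. Returns PPV in percent."""
    pps = [s - d for s, d in beats]       # pulse pressure per beat
    pp_max, pp_min = max(pps), min(pps)
    pp_mean = (pp_max + pp_min) / 2
    return 100.0 * (pp_max - pp_min) / pp_mean

beats = [(120, 80), (118, 79), (112, 76), (110, 75), (116, 78)]
ppv = pulse_pressure_variation(beats)
likely_responsive = ppv > 13.0            # common clinical cutoff
```

A real predictive model would of course learn from the full waveforms rather than a single hand-crafted index.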
Professor Nelson Max is leading a team to develop a quadcopter-based augmented reality video game, in which the players pilot quadcopters “first-person”, viewing an AR game environment through a head-mounted display. The team is seeking a student to continue development of the quadcopter control system using the Robot Operating System (ROS). The student will be responsible for improving the existing control algorithm and interfacing the control algorithm to the Unity game engine to coordinate the real and virtual game experiences. The student will collaborate with other team members responsible for game design and quadcopter localization.
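Quadcopter control loops of the kind mentioned are commonly built from PID controllers. A minimal, self-contained sketch follows (plain Python, no ROS dependency; the gains, setpoint, and toy first-order plant are made up for illustration, not taken from the project's existing control stack):

```python
# Minimal PID controller sketch of the kind used in quadcopter control
# loops. The gains, setpoint, and toy plant model are illustrative; the
# actual project uses ROS and its existing control algorithm.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy simulation: drive altitude toward a 1.0 m setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1)
altitude, dt = 0.0, 0.05
for _ in range(200):
    thrust = pid.update(1.0 - altitude, dt)
    altitude += thrust * dt       # crude first-order plant model
```

In the project itself, a controller like this would publish commands over ROS topics and be tuned against the real vehicle dynamics.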
The United Nations Secretariat is currently exploring how public online data can be used to inform international development policies. For example, job market sites can disclose labor market dynamics, retail sites can reveal price barriers for technology diffusion, and social media sites can expose gender inequalities. Professor Hilbert, from UC Davis' Department of Communication, is currently searching for 2-3 research assistants to accompany him to the UN Secretariat of Latin America and the Caribbean, in Santiago, Chile, for a three-month consultancy. The UN will cover flight and transport costs and pay a basic salary to cover living expenses.
The research assistants are expected to program flexible web-scraping tools in Python for specific online sites and present them through a user-friendly interface, like a Jupyter Notebook, readily usable for data collection by UN officers. The team will work directly with UN officials on the specification of the scrapers and present their final products at an official UN conference, organized in Santiago at the end of March, to high-level policy makers and major international tech companies. For more on the project, please see this first publication: “Digital Footprints from Latin America and the Caribbean”.
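A minimal sketch of the scraping pattern involved, using only the Python standard library (the HTML snippet and the "job-title" class name are invented for illustration; a real scraper would target a specific site's actual markup, typically with libraries such as requests and BeautifulSoup):

```python
# Minimal web-scraping sketch using only the standard library.
# The HTML snippet and the "job-title" class are invented for
# illustration; a real scraper targets a specific site's markup.
from html.parser import HTMLParser

class JobTitleParser(HTMLParser):
    """Collects the text of every element whose class is 'job-title'."""
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "job-title":
            self._in_title = True

    def handle_endtag(self, tag):
        self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

html = """
<div><span class="job-title">Data Analyst</span>
<span class="job-title">Web Developer</span></div>
"""
parser = JobTitleParser()
parser.feed(html)
# parser.titles now holds the extracted listings
```

Wrapped in a Jupyter Notebook cell with a few input widgets, a parser like this becomes the kind of user-friendly collection tool the announcement describes.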
Piecewise linear functions of several variables are a fundamental structure in optimization and data science. I am looking for Master’s students who are interested in a project to implement efficient geometric data structures for storing and querying piecewise linear functions. The goal is an open source library for Python or Julia. The immediate application is in cutting planes for integer programming, but no prior experience with this topic is necessary.
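In one variable, the storing-and-querying problem can be sketched with sorted breakpoints and binary search; the several-variable case requires genuinely geometric data structures over polyhedral pieces, which is the heart of the project. A toy 1D sketch (breakpoints and values are made up):

```python
# 1D sketch of storing and querying a piecewise linear function:
# sorted breakpoints plus binary search. The multivariate case, the
# actual project goal, needs geometric data structures over polyhedral
# pieces; this toy example only illustrates the query pattern.
from bisect import bisect_right

class PiecewiseLinear1D:
    def __init__(self, xs, ys):
        """xs: strictly increasing breakpoints; ys: function values there."""
        assert len(xs) == len(ys) and all(a < b for a, b in zip(xs, xs[1:]))
        self.xs, self.ys = xs, ys

    def __call__(self, x):
        """Evaluate by locating the piece in O(log n), then interpolating."""
        if x == self.xs[-1]:
            return self.ys[-1]
        i = bisect_right(self.xs, x)
        if i == 0 or i == len(self.xs):
            raise ValueError("x outside the function's domain")
        x0, x1 = self.xs[i - 1], self.xs[i]
        y0, y1 = self.ys[i - 1], self.ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

f = PiecewiseLinear1D([0.0, 1.0, 3.0], [0.0, 2.0, 1.0])
```

The O(log n) lookup here is what the library would generalize, e.g. via point location in a polyhedral complex.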
In this master’s thesis project, students will investigate using today’s chatbot and speech user interface technologies to support second-language speakers’ language learning, especially the acquisition of everyday conversational skills in a non-native language. The project will take a Human-Computer Interaction perspective, employing state-of-the-art user research methods and interaction design techniques to understand the design principles and key factors to consider in designing successful chatbot-based interactions in the given scenarios. We’ll collaborate with natural language faculty to design and evaluate chatbot-based interactions.
John Owens’s research group focuses on GPU computing and has a large project on parallel graph analytics called Gunrock. We have a large need for application development on Gunrock: writing interesting graph applications that use our framework (we have a long list of these from our funding agency). We hope to train you in GPU computing and in using our framework. This could potentially lead to MS thesis opportunities, but it could also be a shorter project, with the option of switching to another group if interested. We need talented students who can learn quickly and work independently. Funding may be available.
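For intuition, Gunrock's programming model is frontier-centric: each iteration advances the current frontier to its neighbors and filters out already-visited vertices, which the GPU does in bulk parallel. A CPU-side Python sketch of that pattern (the example graph is made up; Gunrock itself is C++/CUDA):

```python
# CPU sketch of the frontier-based traversal pattern that Gunrock
# parallelizes on the GPU: each iteration "advances" the current
# frontier to its unvisited neighbors. The example graph is made up.
def bfs_levels(adj, source):
    """adj: dict mapping node -> list of neighbors. Returns node -> depth."""
    depth = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:               # "advance" step
            for v in adj.get(u, []):
                if v not in depth:       # "filter" step: drop visited nodes
                    depth[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return depth

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
levels = bfs_levels(adj, 0)
```

Graph applications built on the framework express their logic as custom advance/filter operators over frontiers like these.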
Our team, Gunrock, just won the 2018 Amazon Alexa Prize to build social conversational systems. We won a $500,000 cash award by building the best social bot in the world. You can find more information on Amazon’s website.
We are looking for 2-3 first-year master’s students to join our team next year. We would like to recruit students with really strong engineering skills, and we are mostly looking for students with at least two years of industry experience. You will be able to join Professor Yu’s group to learn more about natural language processing, machine learning, and AI, and to work collaboratively with other students to build state-of-the-art, product-level AI systems.
We are combining machine learning, new sensors, and human-machine interfaces to build a new device to help monitor older people in their homes. We are looking for a CS graduate student who has interests in these areas. This project is a collaboration with the UC Davis School of Medicine and CITRIS.
If interested, please send your CV to Professor Joshi as soon as possible. Partial funding is available.
Building shared understanding and knowledge among group members is known to be crucial to team collaboration. However, transferring knowledge from experts to novices often requires extensive co-located, face-to-face teamwork and tutoring. Remote teamwork that involves members distributed across locations and time zones can result in poorer transfer of knowledge due to constraints in telecommunication. In this project, students will explore the system design space of interactive concept map visualization, allowing distributed workers to see the domain concepts they possess and to compare and link their concepts to those of other people. We aim to improve the design of concept visualization so that it is easy to learn and comprehend, and also allows people to build up shared knowledge through knowledge transfer.
I am looking for more Master’s students to help with our multiplayer augmented reality quadcopter game system. The system includes, for each game player, a 3DR Solo quadcopter with a mounted GoPro HERO4 Black video camera, a computer with an NVIDIA GTX 1070 GPU, Oculus Rift VR goggles, Oculus Touch handheld controllers for flying the drone, and wireless communication links. Using markers in the environment, as seen by the video cameras, the computers determine the position of each quadcopter, and use the inertial sensors and a quadcopter physics simulation to extrapolate to future frames to decrease VR latency. The games are written in Unity. The quadcopter positions are communicated to the master computer and used in the game physics calculations. The master computer receives the user control signals and either sends them directly to the quadcopter or modifies them according to the game physics and to avoid collisions. This centralized master server also contains the game logic, such as scoring.
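The latency-hiding extrapolation can be sketched as simple dead reckoning: predict the pose a few frames ahead from the last marker-based fix plus inertial estimates. All numbers below are illustrative, not from the system:

```python
# Sketch of the kind of dead-reckoning extrapolation used to mask
# latency: predict a quadcopter position a few frames ahead from the
# last marker-based fix plus inertial estimates. Numbers are made up.
def extrapolate_position(pos, vel, acc, dt):
    """Constant-acceleration prediction: p + v*dt + 0.5*a*dt^2, per axis."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(pos, vel, acc))

pos = (1.0, 2.0, 1.5)      # last position fix from markers (m)
vel = (0.5, 0.0, -0.1)     # velocity estimate (m/s)
acc = (0.0, 0.0, -0.2)     # acceleration from inertial sensors (m/s^2)
predicted = extrapolate_position(pos, vel, acc, dt=0.05)  # ~3 frames at 60 Hz
```

The real system refines predictions like this with the quadcopter physics simulation rather than a constant-acceleration model.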
The video camera is fixed on the drone, with a wide-angle lens, so that the part of the image appropriate to the user’s head position and orientation can be selected. The computer graphics (CG) augmented elements are added in stereo onto the real video background, also accounting for the user’s head motion. Thus the game players feel as if they were looking through the windows of a real aircraft at the actual environment in which they are flying. We are using the Oculus Rift software development environment, which allows the video input and computer graphics elements to be supplied on separate layers, with different update and motion extrapolation parameters. Using the known quadcopter positions, the images of the other quadcopters in the video background can be covered up with stereo CG models, so that they also appear in 3D.
Our initial game was a pong-like paddle-ball game, with a paddle at each quadcopter and a virtual ball, which we hope to replace with a third quadcopter. There are game displays showing top-down and side views, either on the cockpit dashboard or in a heads-up display on its window, and sound effects when the ball is hit by a paddle or hits the walls, floor, or ceiling of the game space. Our second game was a maze racing game, where two players start at opposite corners of a two-level, maze-like track and attempt to overtake each other.
We are now developing a shooting game, where each player has a gun to shoot opponents, and the controlling computer decides when an opponent has been hit, adding appropriate graphics like fire. The projectiles are shown in stereo CG. When a player’s quadcopter has been disabled, the computer will take control of its flight and bring it to a safe landing. We are also evolving the paddle-ball game into a 3D soccer game, with goal areas on two opposite walls, which will light up when there is a goal.
Several aspects of the system development could lead to Master’s projects.
Advances in biotechnology have provided researchers with the means to directly study the 3D structure of the genome and its modifications. This information (usually represented as a square matrix) can reveal detailed features, such as enhancer-promoter interactions and their contribution to gene expression. Recent studies have shown that variations which disrupt these genome 3D structures can cause various diseases, such as cancer and developmental disorders. We have very recently developed a novel combinatorial method to predict the effect of deletions on the genome 3D structure and to score these variations. As part of this project, we want to extend the proposed method to other types of genetic variants (e.g., duplications and inversions). We would utilize both combinatorial approaches and deep learning models to solve this problem. The objective of this project is thus very clear.
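For intuition only: a genome 3D contact map is a square matrix over genomic bins, and a deletion removes a contiguous run of bins, i.e., the corresponding rows and columns. The toy matrix and bin range below are made up, and the project's actual prediction and scoring method is far more involved than this naive operation:

```python
# Toy illustration: a genome 3D contact map is a square matrix over
# genomic bins, and a deletion removes a contiguous run of bins, i.e.
# the corresponding rows and columns. The matrix and bin range are
# made up; the project's actual scoring method is far more involved.
def delete_bins(matrix, start, end):
    """Remove rows/columns [start, end) from a square contact matrix."""
    keep = [i for i in range(len(matrix)) if not start <= i < end]
    return [[matrix[i][j] for j in keep] for i in keep]

contacts = [
    [9, 4, 1, 0],
    [4, 8, 3, 1],
    [1, 3, 7, 5],
    [0, 1, 5, 9],
]
after = delete_bins(contacts, 1, 3)   # delete bins 1 and 2
```

Duplications and inversions, the project's target extensions, rearrange rows and columns rather than simply removing them, which is what makes them harder to model.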
If you are interested in this project, please email Dr. Fereydoun Hormozdiari and include your CV/resume.
Predicting gene expression/activity levels from sequencing data is an essential problem in biology. In mammalian cells, gene expression is controlled by various biological factors. Recent advances in deep learning have opened a new approach to predicting these factors from the available sequencing data. For example, approaches have been developed that use a deep neural network (DNN) to predict enhancer activity, which can then be used to estimate enhancer and promoter activity. In another study, the authors proposed using DNNs to predict the three-dimensional structure of the genome, which can be used to extract the interactions contributing to gene expression activity. Other studies have also used DNN and CNN approaches for the gene expression prediction problem. A natural question, therefore, is whether we can integrate these DNNs into a novel model that predicts the actual gene expression level from the genomic sequence. The objective of this project is thus very clear.
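The standard first step shared by these sequence-based DNNs is one-hot encoding the DNA input. A minimal sketch (the sequence is a made-up example; real models encode kilobases of context per gene):

```python
# Minimal sketch of the standard input representation for
# sequence-based DNNs: one-hot encoding of DNA. The sequence is a
# made-up example; real models encode kilobases of context per gene.
ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
           "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def one_hot_encode(seq):
    """Map a DNA string to a (len(seq), 4) list-of-lists matrix."""
    return [ONE_HOT[base] for base in seq.upper()]

x = one_hot_encode("ACGT")
```

Matrices like this are what a CNN or other DNN would consume to predict enhancer activity, 3D contacts, or, in this project, expression itself.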
If you are interested in this project, please email Dr. Fereydoun Hormozdiari and include your CV/resume.