My name is Jeff Ott, and I am an engineer turned data scientist. I recently graduated from USF's Master's in Data Science program, where I tackled a wide range of topics and projects. I will post what projects I can, but at the university's request the code will not be readily available except by specific request. I am interested in computer vision, deep learning, A/B testing, and the data engineering that supports these interests.
Description: In work I did for the UCSF Brain Lab (repository here: https://github.com/darenma/unet2021), we predicted ADAS-Cog, a memory exam given to patients to diagnose cognitive decline, from 3D MRI volumes. This was done with dual semantic-segmentation U-Nets that predict both matter types and the regions of the Desikan-Killiany brain atlas. Using these data pipelines, I also predicted the ADAS-Cog score. Below are the results from one of the segmentation pipelines.
Libraries Used: PyTorch, TorchIO, PyTorch Lightning, Plotly, Sklearn
(Figures: Parcellation segmentation results alongside the raw MRIs)
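To illustrate the dual-head idea (not the lab's actual architecture), here is a minimal PyTorch sketch of a 3D fully convolutional network with two output heads, one for matter types and one for atlas regions; the layer sizes and class counts are assumptions.

```python
import torch
import torch.nn as nn

class DualHeadSegNet(nn.Module):
    """Toy 3D segmentation network with two heads (illustrative only)."""
    def __init__(self, n_matter_classes=4, n_atlas_classes=70):  # class counts are assumptions
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Separate 1x1x1 convolutions act as the two segmentation heads
        self.matter_head = nn.Conv3d(32, n_matter_classes, 1)
        self.atlas_head = nn.Conv3d(32, n_atlas_classes, 1)

    def forward(self, x):
        features = self.backbone(x)
        return self.matter_head(features), self.atlas_head(features)

# One fake 64^3 MRI volume: batch x channel x depth x height x width
volume = torch.randn(1, 1, 64, 64, 64)
matter_logits, atlas_logits = DualHeadSegNet()(volume)
print(matter_logits.shape, atlas_logits.shape)
```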
Description: Utilizing a GPT-2 Seq2Seq model, I was able to recreate different characters from the podcast series Critical Role. I then attached these bots to nodes that walked around and spoke to each other. Their paths can be seen below. The full repository is found here: https://github.com/Jeffotter/BotTown
Libraries Used: PyTorch, Pandas, NLTK, Matplotlib
(Figures: Walking Characters and a Sample Conversation)
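For flavor, here is a minimal sketch of generating a character line with a GPT-2 model; the Hugging Face transformers package, the base "gpt2" checkpoint, and the prompt format are my assumptions rather than the exact setup in the repository.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Prompt the model in the voice of a character (prompt format is hypothetical)
prompt = "Grog: I say we charge in. Vex: "
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```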
Description: In this project, we wrote Python scripts to convert between different data formats (such as XML and JSON) from the command line; a minimal sketch follows the library list.
Libraries Used: sys, untangle, xmltodict, json
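A minimal sketch of the idea, assuming the script takes an XML file path as its first argument and prints the equivalent JSON (the argument handling is an assumption):

```python
import json
import sys

import xmltodict

# Parse the XML file given on the command line into nested dictionaries,
# then print the same data back out as JSON.
with open(sys.argv[1]) as f:
    parsed = xmltodict.parse(f.read())
print(json.dumps(parsed, indent=2))
```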
Description: In this project, we implemented a search engine with both a linear search and a hash-table search, then compared the differences between the two. We then built a local Flask website to let users access and use the engine; a sketch of the two search strategies follows the library list.
Libraries Used: Flask, doc2vec, Regex, Codecs, Numpy
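A minimal sketch of the two approaches, assuming each document is a plain string: the linear search scans every document per query, while the hash-table version builds a word-to-document index once and looks terms up directly (variable names are my own):

```python
from collections import defaultdict

docs = {0: "the quick brown fox", 1: "lazy dogs sleep", 2: "quick thinking wins"}

def linear_search(term):
    # Scan every document on every query
    return [doc_id for doc_id, text in docs.items() if term in text.split()]

def build_index():
    # Build the word -> set of doc ids mapping once
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():
            index[word].add(doc_id)
    return index

index = build_index()

def hash_search(term):
    # Each query is now a single dictionary lookup
    return sorted(index.get(term, set()))

print(linear_search("quick"), hash_search("quick"))
```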
Description: In this project, we processed zipped XML data (44M uncompressed, 9164 files), stripped the XML markup, and tokenized the remaining strings. We developed a workflow that calculates TF-IDF (Term Frequency-Inverse Document Frequency) for each document.
Libraries Used: nltk, xml.etree.cElementTree, sklearn.feature_extraction.text, collections, zipfile, string
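Since sklearn.feature_extraction.text appears in the list above, the core of such a workflow can be sketched with TfidfVectorizer; the sample documents here are placeholders standing in for the tokenized XML articles:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder documents standing in for the tokenized XML articles
documents = [
    "markets rallied after the announcement",
    "the team announced a new stadium",
    "stadium markets open early",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(documents)  # sparse (n_docs x n_terms) matrix

# Show each term's TF-IDF weight in the first document
terms = vectorizer.get_feature_names_out()
for idx in tfidf[0].nonzero()[1]:
    print(terms[idx], round(tfidf[0, idx], 3))
```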
In this project, I was first introduced to word embeddings in the form of word2vec. I converted all the documents into lists of embeddings and found the centroid of each document. I then recommended documents based on the Euclidean distance between centroids. We then used Flask, Gunicorn, and Jinja to build a scalable website hosted on an EC2 instance on AWS.
Libraries Used: flask, doc2vec, re, string, numpy, codecs
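A minimal sketch of the centroid-based recommendation, assuming the word vectors have already been loaded into a dictionary (the vectors and documents below are placeholders, not real word2vec output):

```python
import numpy as np

# Placeholder pre-trained word vectors (real ones would come from word2vec)
word_vectors = {
    "dog": np.array([0.9, 0.1]), "cat": np.array([0.8, 0.2]),
    "stock": np.array([0.1, 0.9]), "market": np.array([0.2, 0.8]),
}

def centroid(doc):
    # Average the embeddings of the words we have vectors for
    vecs = [word_vectors[w] for w in doc.split() if w in word_vectors]
    return np.mean(vecs, axis=0)

docs = {"pets": "dog cat dog", "finance": "stock market stock", "mixed": "dog market"}
centroids = {name: centroid(text) for name, text in docs.items()}

def recommend(query_doc, k=2):
    # Rank the other documents by Euclidean distance between centroids
    q = centroids[query_doc]
    others = [(np.linalg.norm(q - c), name) for name, c in centroids.items() if name != query_doc]
    return [name for _, name in sorted(others)[:k]]

print(recommend("pets"))
```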
In this project, I learned how to mine Twitter data and perform sentiment analysis to find a user's average sentiment across their tweets. I then hosted this website on EC2 and let users look up the average sentiment of any public Twitter handle. This introduced me to web APIs and to classifying sentiment from raw text.
Libraries Used: flask, tweetie, colour, numpy, tweepy, vaderSentiment
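The scoring step can be sketched with vaderSentiment's compound score; the tweets below are placeholders standing in for what Tweepy would return from the Twitter API:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Placeholder tweets; in the real app these come back from the Twitter API via tweepy
tweets = [
    "Loving the new release, great work!",
    "This update broke everything, very frustrating.",
    "Meeting friends for coffee later.",
]

analyzer = SentimentIntensityAnalyzer()
# The compound score runs from -1 (most negative) to +1 (most positive)
scores = [analyzer.polarity_scores(t)["compound"] for t in tweets]
print("average sentiment:", sum(scores) / len(scores))
```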
In this project, we attempted to predict median housing prices in California using the unemployment rate and mortgage rate as supporting variables. We tried three different models on the data: ETS, SARIMAX, and FB Prophet. We were able to get a moderately good prediction with an RMSE of $7,720. The methods and results are displayed in the Zillow Housing Prediction PDF.
Libraries Used: Pandas, Numpy, statsmodels, fbprophet, tqdm, sklearn, pmdarima, matplotlib
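The SARIMAX piece can be sketched with statsmodels; the synthetic series, the single exogenous column, and the (1,1,1)(1,1,1,12) orders below are illustrative assumptions, not the values chosen in the project:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly median-price series with one exogenous regressor (e.g. mortgage rate)
rng = np.random.default_rng(0)
dates = pd.date_range("2015-01-01", periods=72, freq="MS")
mortgage_rate = pd.Series(4 + 0.5 * np.sin(np.arange(72) / 6), index=dates)
price = pd.Series(300_000 + 1_500 * np.arange(72) + 5_000 * rng.standard_normal(72), index=dates)

model = SARIMAX(price, exog=mortgage_rate, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
result = model.fit(disp=False)

# Forecast the next 12 months, supplying assumed future mortgage rates
future_dates = pd.date_range(dates[-1] + pd.offsets.MonthBegin(), periods=12, freq="MS")
future_rate = pd.Series([4.2] * 12, index=future_dates)
print(result.forecast(steps=12, exog=future_rate))
```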
I implemented OLS, L2 regularization, and logistic regression in this project. I created functions to normalize the data and compute the loss gradient with and without regularization. I then utilized these functions to build LinearRegression, LogisticRegression, and RidgeRegression classes.
Libraries Used: pandas, numpy
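As an illustration of the gradient step, here is a minimal NumPy sketch of ridge (L2-regularized) linear regression trained by gradient descent; the learning rate, iteration count, and data are placeholders, and the function names are my own:

```python
import numpy as np

def ridge_gradient(X, y, w, lam):
    # Gradient of (1/n)*||Xw - y||^2 + lam*||w||^2 with respect to w
    n = len(y)
    return (2 / n) * X.T @ (X @ w - y) + 2 * lam * w

def fit_ridge(X, y, lam=0.1, lr=0.01, n_iter=5000):
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= lr * ridge_gradient(X, y, w, lam)
    return w

# Tiny synthetic problem: y ~ 3*x1 - 2*x2 plus noise
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(200)
print(fit_ridge(X, y))
```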
In this project, I built a multinomial Naive Bayes classifier to predict whether a movie review was positive or negative. I used Laplace smoothing to deal with unseen words and vectorized operations to increase speed. I then used a k-fold cross-validation class I coded to train the model and compare it against sklearn's. I was able to achieve 80% accuracy with this model.
Libraries Used: sklearn, numpy, time, codecs, re
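The core of the classifier is word-count log-probabilities with Laplace smoothing; here is a minimal NumPy sketch on a toy count matrix (the data and the alpha=1 smoothing value are placeholders):

```python
import numpy as np

# Toy document-term count matrix (rows = reviews, columns = vocabulary words)
X = np.array([[2, 0, 1], [3, 1, 0], [0, 2, 3], [0, 3, 2]])
y = np.array([1, 1, 0, 0])  # 1 = positive review, 0 = negative

def fit_multinomial_nb(X, y, alpha=1.0):
    classes = np.unique(y)
    log_priors = np.log(np.array([np.mean(y == c) for c in classes]))
    # Laplace smoothing: add alpha to every word count before normalizing
    counts = np.array([X[y == c].sum(axis=0) + alpha for c in classes])
    log_likelihoods = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_priors, log_likelihoods

def predict(X, classes, log_priors, log_likelihoods):
    # Vectorized: log P(c) + sum over words of count * log P(word | c)
    scores = X @ log_likelihoods.T + log_priors
    return classes[np.argmax(scores, axis=1)]

model = fit_multinomial_nb(X, y)
print(predict(X, *model))
```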
In this project, I attempted to recreate sklearn's decision trees using recursively constructed trees. I implemented LeafNode, DecisionNode, and DecisionTree classes and split using Gini impurity for classification and MSE for regression. I then inherited from these classes in my RegressionTree and ClassifierTree classes. I compared these with the sklearn implementations and matched them within a small margin of error.
Libraries Used: numpy, scipy.stats, lolviz
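The splitting criterion is the part worth showing; here is a minimal sketch of Gini impurity and an exhaustive best-split search over one feature column (the function names are my own, not the project's):

```python
import numpy as np

def gini(y):
    # Gini impurity: 1 - sum of squared class proportions
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    # Try each candidate threshold and keep the one with the lowest weighted impurity
    best = (None, np.inf)
    for threshold in np.unique(x):
        left, right = y[x <= threshold], y[x > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if weighted < best[1]:
            best = (threshold, weighted)
    return best

x = np.array([1.0, 2.0, 3.0, 8.0, 9.0, 10.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))  # expect threshold 3.0 with weighted impurity 0.0
```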
Using my decision tree implementation from before, I was tasked with combining these trees into a random forest. I built RandomForestRegressor and RandomForestClassifier classes. I had to implement bootstrapping, subsampling, out-of-bag error estimation, and random forest prediction to get accuracy comparable to sklearn's.
Libraries Used: numpy, scipy.stats, lolviz
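The bootstrapping and out-of-bag bookkeeping can be sketched independently of the tree code; this sketch uses sklearn's DecisionTreeClassifier as a stand-in for my own trees (an assumption, purely to keep the example short):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n = len(y)

trees, oob_votes = [], [[] for _ in range(n)]
for _ in range(25):
    # Bootstrap sample: draw n rows with replacement, track which rows were left out
    idx = rng.integers(0, n, size=n)
    oob_mask = np.ones(n, dtype=bool)
    oob_mask[idx] = False

    tree = DecisionTreeClassifier(max_features="sqrt", random_state=0).fit(X[idx], y[idx])
    trees.append(tree)

    # Out-of-bag rows get votes only from trees that never saw them
    for i, pred in zip(np.where(oob_mask)[0], tree.predict(X[oob_mask])):
        oob_votes[i].append(pred)

# Out-of-bag error: majority vote per row, compared against the true label
oob_pred = [np.bincount(v).argmax() if v else -1 for v in oob_votes]
oob_error = np.mean([p != t for p, t in zip(oob_pred, y) if p != -1])
print("OOB error:", round(oob_error, 3))
```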