I am Andreas, a Natural Language Processing engineer currently working as a Research Assistant at the University of Edinburgh.

My current focus is on Named Entity Recognition and Relation Extraction from biomedical text under the supervision of Beatrice Alex.

In 2017 I finished my Master's at the University of Edinburgh, where I specialised in Natural Language Processing and applied Machine Learning.

More specifically, I learned a lot about the dark art of training neural networks and became familiar with the language modelling and dependency parsing literature.

My dissertation, supervised by Adam Lopez and Clara Vania, was on analysing the syntactic structure of sentences in morphologically rich languages using neural network models (demo, code, write up). The results confirmed that for such languages, encoders that construct word representations from characters significantly outperform those that model the input at the word level.
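For a rough idea of what a character-level encoder looks like, here is a minimal sketch of my own (not the dissertation code; the class name and dimensions are invented for the example): each word representation is built by running a bidirectional LSTM over the word's characters, rather than looking the word up in a word-level embedding table.

```python
# Minimal illustration of a character-level word encoder (PyTorch).
# Hypothetical example, not the code used in the dissertation.
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, n_chars, char_dim=32, word_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # word_dim // 2 per direction, so the concatenated states give word_dim
        self.bilstm = nn.LSTM(char_dim, word_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, char_ids):
        # char_ids: (n_words, max_word_len) character indices for one sentence
        embedded = self.char_emb(char_ids)
        _, (h_n, _) = self.bilstm(embedded)
        # concatenate the final forward and backward hidden states per word
        return torch.cat([h_n[0], h_n[1]], dim=-1)  # (n_words, word_dim)

# Example: encode a 3-word sentence, each word padded to 6 characters.
encoder = CharWordEncoder(n_chars=100)
words = torch.randint(1, 100, (3, 6))
print(encoder(words).shape)  # torch.Size([3, 64])
```

The appeal for morphologically rich languages is that inflected forms sharing a stem also share characters, so the encoder can relate them, whereas a word-level lookup table treats every surface form as an unrelated symbol.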

Stack trace

0

After my M.Sc. I was an R&D Data Scientist at Mudano, where I prototyped NLP and ML algorithms under the supervision of Euan Wielewski.

1

Before my M.Sc. I was a research assistant at NCSR Demokritos under the supervision of Natasa Krithara and George Paliouras. I worked on text classification and on extracting relations and entity mentions from streams of text. Our submission placed third in the Author Profiling task at PAN 2015.

2

Before that I worked on Named Entity Recognition for Greek and Serbian under the supervision of Iraklis Varlamis.

3

I graduated from Harokopio University of Athens in 2014. For my B.Sc. dissertation, supervised by Rania Hatzi, Mara Nikolaidou and Dimosthenis Anagnostopoulos, I constructed a word similarity measure from synonym graphs with the intent of aligning sentences to recognise textual entailment. The word representations I built were influenced by my reading of the book Gödel, Escher, Bach.

4

I did my internship in 2013 at Scify under the supervision of George Giannakopoulos, who got me interested in Natural Language Processing.

More text/code with low perplexity under my language model

My vimrc and other dotfiles can be found here.