diff --git a/_includes/01_research.html b/_includes/01_research.html index 9272d60f..b5c41195 100755 --- a/_includes/01_research.html +++ b/_includes/01_research.html @@ -1,169 +1,130 @@ -

Research -
-
- Read research overview (interpretable modeling) -
-

🔎 My research focuses on how we can build trustworthy machine-learning systems by making them - interpretable. In - my work, interpretability is grounded seriously via close collaboration with domain experts, e.g. - medical - doctors or cell biologists. These collaborations have given rise to useful methodology, roughly split - into two - areas: (1) building more effective transparent models and (2) improving the trustworthiness of - black-box - models. Going forward, I hope to help bridge the gap between transparent models and black-box - models to - improve real-world healthcare. -

-

🌳 Whenever possible, building transparent models is the most effective route towards ensuring - interpretability. - Transparent models are interpretable by design, including models such as (concise) decision trees, rule - lists, - and linear models. My work in this area was largely motivated by the problem of - clinical - decision-rule development. Clinical decision rules (especially those used in emergency - medicine), need - to be extremely transparent so they can be readily audited and used by physicians making split-second - decisions. - To this end, we have developed methodology for enhancing decision trees. For example, replacing the - standard - CART algorithm with a novel greedy algorithm for - tree-sums can - substantially improve predictive performance without sacrificing predictive performance. Additionally, - hierarchical regularization can improve the predictions - of - an already fitted model without altering its interpretability. Despite their effectiveness, transparent - models - such as these often get overlooked in favor of black-box models; to address this issue, we've spent - a lot of - time curating imodels, an open-source package for - fitting - state-of-the-art transparent models. -

-

🌀 My second line of my work focuses on interpreting and improving black-box models, such as - neural - networks, for - the cases when a transparent model simply can't predict well enough. Here, I work closely on - real-world - problems such as analyzing imaging data from cell biology and cosmology. Interpretability in these contexts demands - more - nuanced information than standard notions of "feature importance" common in the literature. As - a result, we - have developed methods to characterize and summarize the interactions in a neural network, particularly in transformed domains (such as the Fourier domain), where - domain interpretations can be more natural. I'm particularly interested in how we can ensure that - these - interpretations are useful, either by using them to embed prior knowledge into a model or - identify when it can be trusted.

-

🤝 There is a lot more work to do on bridging the gap between transparent models and black-box models in - the real - world. One promising avenue is distillation, whereby we can use a black-box model to build a better - transparent - model. For example, in one - work we were able to distill state-of-the-art neural networks in cell-biology and cosmology into - transparent wavelet models with <40 parameters. Despite this huge size reduction, these models - actually improve - prediction performance. By incorporating close domain knowledge into models and the way we approach - problems, I - believe interpretability can help unlock many benefits of machine-learning for improving healthcare and - science. -

-
-
-

-
- -
+

Research

+ + +
+
+ Here are some areas I'm currently excited about. If you want to chat about research (or + are interested in interning at MSR), feel free to reach out over email :)
+ +
🔎 + Interpretability. I'm interested in rethinking + interpretability in the context of LLMs (collaboration with many folks, particularly Rich Caruana). +
+
+ augmented imodels - use LLMs to build a + transparent model
+ imodels - build interpretable models in the style of + scikit-learn (sketch after this list)
+ explanation penalization - regularize + explanations to align models with prior knowledge
+ adaptive + wavelet distillation - replace neural nets with simple, performant wavelet models +
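As a quick illustration of the scikit-learn-style workflow that imodels targets, here is a minimal sketch. It assumes the package exposes an sklearn-compatible estimator such as FIGSClassifier with a max_rules argument (names and arguments may differ), so treat it as illustrative rather than exact:

```python
# Minimal sketch: fit a transparent model with imodels' scikit-learn-style API.
# FIGSClassifier and its max_rules argument are assumed here for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from imodels import FIGSClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = FIGSClassifier(max_rules=10)  # cap complexity so the fitted model stays readable
model.fit(X_train, y_train)
print(model)                                   # transparent models can be printed and audited
print("test accuracy:", model.score(X_test, y_test))
```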
+ +
+ + 🚗 LLM steering. Interpretability tools can provide ways to better guide and use LLMs + (collaboration with many folks, particularly Jack Morris). +
+
+ tree prompting - improve black-box few-shot text classification + with decision trees
+ attention steering - guide LLMs by emphasizing specific input + spans (sketch below)
+ interpretable autoprompting - automatically find fluent + natural-language prompts
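To make the attention-steering idea concrete, here is a toy reweighting of an already-softmaxed attention matrix: attention to tokens outside the emphasized span is shrunk and each row is renormalized. This is only a schematic of the general idea, not the implementation from the paper:

```python
import numpy as np

def steer_attention(scores: np.ndarray, emphasized: list[int], alpha: float = 0.01) -> np.ndarray:
    """Downweight attention to non-emphasized key positions, then renormalize.

    scores: (n_queries, n_keys) matrix of softmaxed attention weights.
    emphasized: key indices to emphasize.
    alpha: shrink factor applied to all other key positions (toy value).
    """
    steered = scores.copy()
    off_span = np.ones(scores.shape[-1], dtype=bool)
    off_span[emphasized] = False
    steered[:, off_span] *= alpha                     # de-emphasize everything off the span
    steered /= steered.sum(axis=-1, keepdims=True)    # rows sum to 1 again
    return steered

attn = np.full((1, 4), 0.25)                          # uniform attention over 4 tokens
print(steer_attention(attn, emphasized=[1, 2]))       # mass shifts onto tokens 1 and 2
```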
+
- - - - - - - - - - - - - - - - - - - - - - - - - - - - +
+ + 🧠 Neuroscience. Since joining MSR, I have been focused on building and applying these methods + to understand how the human brain represents language (using fMRI in collaboration with the Huth lab at UT Austin). +
+
+ summarize & score explanations - generate natural-language + explanations of fMRI encoding models +
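For context, the encoding models being explained here typically map stimulus features (e.g., language-model embeddings of a narrated story) to voxel responses with ridge regression, scored by held-out voxelwise correlation. The sketch below uses random stand-in arrays purely to show that shape of pipeline; it is not the actual analysis code:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.standard_normal((900, 768))    # stand-in stimulus embeddings (time x features)
Y_train = rng.standard_normal((900, 50))     # stand-in fMRI responses (time x voxels)
X_test = rng.standard_normal((100, 768))
Y_test = rng.standard_normal((100, 50))

enc = Ridge(alpha=100.0).fit(X_train, Y_train)   # one linear readout per voxel
pred = enc.predict(X_test)

# score each voxel by correlating predicted and held-out responses
corrs = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(Y_test.shape[1])]
print(f"mean voxelwise correlation: {np.mean(corrs):.3f}")
```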
+ +
💊 + Healthcare. I'm also actively working on how we can improve clinical decision instruments by using + the information contained across various sources in the medical literature (in collaboration with many folks + including Aaron Kornblith at UCSF and the MSR Health Futures team). +
+
+ clinical self-verification - self-verification improves + performance and interpretability of clinical information extraction (sketch below)
+ clinical rule + vetting - stress testing the performance of a clinical decision instrument for intra-abdominal injury + +
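A toy extract-then-verify loop gives the flavor of clinical self-verification: a model drafts an extraction, checks each item against the note, and drops anything it cannot support. The llm callable below is a hypothetical stand-in, and the prompts are illustrative, not those used in the paper:

```python
def extract_then_verify(llm, note: str) -> str:
    """Draft an extraction from a clinical note, then self-verify and prune it."""
    draft = llm(f"List the medications mentioned in this clinical note:\n{note}")
    verdicts = llm(
        "For each medication below, answer 'supported' or 'unsupported' "
        f"using only the note.\nNote: {note}\nMedications: {draft}"
    )
    # keep only the items the model itself marks as supported by the note
    return llm(f"Remove unsupported items.\nMedications: {draft}\nVerdicts: {verdicts}")

if __name__ == "__main__":
    fake_llm = lambda prompt: "aspirin"   # trivial stand-in so the sketch runs end-to-end
    print(extract_then_verify(fake_llm, "Patient takes aspirin 81 mg daily."))
```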
+
+ +
+ - {% include 00_about.html %} diff --git a/_notes/neuro/comp_neuro.md b/_notes/neuro/comp_neuro.md index d2324aec..7dff70d1 100755 --- a/_notes/neuro/comp_neuro.md +++ b/_notes/neuro/comp_neuro.md @@ -57,7 +57,7 @@ subtitle: Diverse notes on various topics in computational neuro, data-driven ne - retina has on-center / off-surround cells - stimulated by points - then, V1 has differently shaped receptive fields -- *efficient coding hypothesis* - learns different combinations (e.g. lines) that can efficiently represent images +- *efficient coding hypothesis* - brain learns different combinations (e.g. lines) that can efficiently represent images 1. sparse coding (Olshausen and Field, 1996) 2. ICA (Bell and Sejnowski, 1997) @@ -1519,11 +1519,11 @@ the operations above allow for encoding many normal data structures into a singl ## neuro-dl reviews -- https://xcorr.net/2023/01/01/2022-in-review-neuroai-comes-of-age/ +- Blog post ([xcorr, 2023](https://xcorr.net/2023/01/01/2022-in-review-neuroai-comes-of-age/)) - Neuroscience-Inspired Artificial Intelligence ([hassabis et al. 2017](https://www.cell.com/neuron/pdf/S0896-6273(17)30509-3.pdf)) -- Toward next-generation artificial intelligence: catalyzing the NeuroAI revolution ([zador, ...bengio, dicarlo, lecun, ...sejnowski, tsao, 2022](https://arxiv.org/abs/2210.08340)) +- Catalyzing next-generation Artificial Intelligence through NeuroAI ([zador, ...bengio, dicarlo, lecun, ...sejnowski, tsao, 2022](https://arxiv.org/abs/2210.08340)) - Computational language modeling and the promise of in silico experimentation ([jain, vo, wehbe, & huth, 2023](https://direct.mit.edu/nol/article/doi/10.1162/nol_a_00101/114613/Computational-language-modeling-and-the-promise-of)) - 4 experimental design examples @@ -1557,6 +1557,8 @@ the operations above allow for encoding many normal data structures into a singl - Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems ([smolensky, ..., gao, 2022](https://ojs.aaai.org/index.php/aimagazine/article/view/18599)) +- A Path Towards Autonomous Machine Intelligence ([lecun 2022](https://openreview.net/pdf?id=BZ5a1r-kVsf)) + - [Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks](https://www.semanticscholar.org/paper/Towards-NeuroAI%3A-Introducing-Neuronal-Diversity-Fan-Li/c0aae24f2e250c7d4b5aab608622dbb933f43a4d) (2023) - A rubric for human-like agents andNeuroAI ([momennejad, 2022](https://royalsocietypublishing.org/doi/epdf/10.1098/rstb.2021.0446)): 3 axes - human-like behavior, neural plausibility, & engineering diff --git a/_notes/research_ovws/ovw_interp.md b/_notes/research_ovws/ovw_interp.md index d2335e2c..6ce83644 100755 --- a/_notes/research_ovws/ovw_interp.md +++ b/_notes/research_ovws/ovw_interp.md @@ -809,9 +809,11 @@ Symbolic regression learns a symbolic (e.g. a mathematical formula) for a functi - e.g. 
scatter plot, meta-model plot, regional VIMs, parametric VIMs - CSM - relative change of model ouput mean when range of $X_i$ is reduced to any subregion - CSV - same thing for variance +- Sparse and Faithful Explanations Without Sparse Models ([sun...wang, rudin, 2024](https://arxiv.org/pdf/2402.09702.pdf)) - introduce sparse explanation value (SEV) - that measure the decision sparsity of a model (defined using movements over a hypercube) - [A Simple and Effective Model-Based Variable Importance Measure](https://arxiv.org/pdf/1805.04755.pdf) - measures the feature importance (defined as the variance of the 1D partial dependence function) of one feature conditional on different, fixed points of the other feature. When the variance is high, then the features interact with each other, if it is zero, they don’t interact. - [Learning to Explain: Generating Stable Explanations Fast - ACL Anthology](https://aclanthology.org/2021.acl-long.415/) (situ et al. 2021) - train a model on "teacher" importance scores (e.g. SHAP) and then use it to quickly predict importance scores on new examples +- Guarantee Regions for Local Explanations ([havasi...doshi-velez, 2024](https://arxiv.org/abs/2402.12737v1)) - use anchor points to find regions for which local interp methods reliably fit the full model ### importance curves diff --git a/_notes/research_ovws/ovw_transformers.md b/_notes/research_ovws/ovw_transformers.md index e1694e9d..2463ba2e 100644 --- a/_notes/research_ovws/ovw_transformers.md +++ b/_notes/research_ovws/ovw_transformers.md @@ -148,59 +148,6 @@ See related papers in the [📌 interpretability](https://csinva.io/notes/resear - Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale ([meta, 2023](https://research.facebook.com/publications/voicebox-text-guided-multilingual-universal-speech-generation-at-scale/)) -## retrieval-augmented-generation (RAG) + tool use - -- private - - https://www.perplexity.ai/ - nice demo adding citation to each fact - - https://you.com - - [langchain](https://github.com/hwchase17/langchain) library - - https://www.fixie.ai/ - provide tools for wrapping APIs in LLM + interaction through router (also default modules for stateful storage, user identity, etc.) -- Augmented Language Models: a Survey ([meta, 2023](https://arxiv.org/abs/2302.07842)) - 3 categories: reasoning, tools, action - - PAL: Program-aided Language Models ([gao...neubig, 2023](https://arxiv.org/abs/2211.10435)) - - Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP ([khattab, ..., liang, potts, & zaharia, 2022](https://arxiv.org/abs/2212.14024)) - use high-level programs to use multiple steps between retrieving and reading -- Infer–Retrieve–Rank: In-Context Learning for Extreme Multi-Label Classification ([D’Oosterlinck, ..., potts, 2024](https://arxiv.org/pdf/2401.12178.pdf)) - 1. Infer: an LM processes the input document and guesses a set of applicable terms - 2. Retrieve: a retriever relates each predicted term to the actual label space - 3. Rank: Finally, an LM is used to rerank retrieved labels -- Toolformer: Language Models Can Teach Themselves to Use Tools ([meta, 2023](https://arxiv.org/abs/2302.04761)) - model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction - - Given input, sample position and API call candidates, try them all, and filter out ones which do not reduce next-token loss - - put correct API calls into prompt, e.g. 
Pittsburgh is also known as `[QA(What ...?→ Steel City)]` the Steel City. - - - Training - - start with few human-written examples of API use - - LLM generates more uses - - self-supervised loss determines which calls help with future-token prediction - - - Atlas: Few-shot Learning with Retrieval Augmented Language Models ([meta, 2022](https://arxiv.org/abs/2208.03299)) - -- Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection ([asai...hajishirzi, 2023](https://arxiv.org/abs/2310.11511)) - train an LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called reflection token -- Active RAG ([jiang...neubig, 2023](https://arxiv.org/abs/2305.06983)) - propose FLARE, which iteratively uses a prediction of the upcoming sentence to anticipate future content, which is then utilized as a query to retrieve relevant documents to regenerate the sentence if it contains low-confidence tokens -- retreival-augmented in-context learning (put retrieved info into context, or something very similar) - - REALM ([guu, ..., chang, 2020](https://arxiv.org/abs/2002.08909)) - retrieves document chunks from corpus and adds them to context, for open-domain QA - - RETRO ([deepmind, 2022](https://arxiv.org/abs/2112.04426)) - nearest neighbors to model's input are retrieved, encoded, and conditioned on with chunked cross-attention - - Decomposed prompting ([khot et al., 2022](https://arxiv.org/pdf/2210.02406.pdf)) - decompose tasks via prompting which are delegated to a shared library of prompting-based LLMs dedicated to these sub-tasks - - LLM-Augmenter ([peng, galley...gao, 2023](https://arxiv.org/abs/2302.12813)) - (1) consolidates evidence from external knowledge for the LLM to generate responses grounded in evidence, and (2) revising LLM’s (candidate) responses using automated feedback - - Knowledgeable Prompt-tuning ([Hu et al. 
2021](https://arxiv.org/abs/2108.02035)) - add knowledge-base info into the prompt search -- memorizing transformers ([wu...szegedy, 2022](https://arxiv.org/abs/2203.08913)) - knn-based learned indexing + retrieval at training time - - at test time, you just need to index the entire context and the model will be able to use it - - kNN Prompting: Learning Beyond the Context with Nearest Neighbor Inference ([xu...zhang, 2023](https://openreview.net/forum?id=fe2S7736sNS)) - instead of verbalizer, use nearest-neighbor - - has dbpedia results - - kNN-Prompt: Nearest Neighbor Zero-Shot Inference ([shi...zettlemoyer, 2022](https://arxiv.org/pdf/2205.13792.pdf)) -- original - - ACT-1: Transformer for Actions ([2022, adept](https://www.adept.ai/act)) - transformer directly interacts with computer - - ReAct: Synergizing Reasoning and Acting in Language Models ([yao...cao, 2022](https://arxiv.org/abs/2210.03629)) - use LLMs to generate reasoning traces + task-specific actions in interleaved manner - - RLPG ([shrivastava, larochelle, & tarlow, 2022](https://arxiv.org/abs/2206.12839)) - for code-completion, retrieves functions from a repo - - knowledge base triplets - - Relational Memory-Augmented Language Models ([liu, yogatama, & blunsom, 2022](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00476/110997/Relational-Memory-Augmented-Language-Models)) - integrate knowledge base triplets with LLM - - DRAGON: Deep Bidirectional Language-Knowledge Graph Pretraining ([yasanaga, ..., manning, liang, leskovec, 2022](https://arxiv.org/abs/2210.09338)) - -- webgpt ([nakano, ..., schulman, 2022, OpenAI](https://arxiv.org/abs/2112.09332)) - allows google search to add world info - - Internet-augmented language models ([Lazaridou et al., 2022](https://arxiv.org/pdf/2203.05115.pdf)) - - GopherCite ([menick, ..., mcaleese, 2022, Deepmind](https://arxiv.org/abs/2203.11147)) - generate answers + link/relevant snippet when making predictions (trained with RL from human preferences ) - - LaMDA ([thoppilan, ..., quoc le, 2022, google](https://arxiv.org/abs/2201.08239)) - allows google search to add world info (in a dialog model) - - this was the model that sparked the controversy about consciousness 🤔 - - A Neural Corpus Indexer for Document Retrieval ([wang...yang, 2022](https://arxiv.org/abs/2206.02743)) - train model to directly spit out document IDs given queries - # prompting - https://github.com/dair-ai/Prompt-Engineering-Guide @@ -446,6 +393,56 @@ See related papers in the [📌 interpretability](https://csinva.io/notes/resear # misc +## retrieval-augmented-generation (RAG) + tool use + +- private + - https://www.perplexity.ai/ - nice demo adding citation to each fact + - https://you.com + - [langchain](https://github.com/hwchase17/langchain) library + - https://www.fixie.ai/ - provide tools for wrapping APIs in LLM + interaction through router (also default modules for stateful storage, user identity, etc.) 
+- Augmented Language Models: a Survey ([meta, 2023](https://arxiv.org/abs/2302.07842)) - 3 categories: reasoning, tools, action + - PAL: Program-aided Language Models ([gao...neubig, 2023](https://arxiv.org/abs/2211.10435)) + - Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP ([khattab, ..., liang, potts, & zaharia, 2022](https://arxiv.org/abs/2212.14024)) - use high-level programs to use multiple steps between retrieving and reading +- Infer–Retrieve–Rank: In-Context Learning for Extreme Multi-Label Classification ([D’Oosterlinck, ..., potts, 2024](https://arxiv.org/pdf/2401.12178.pdf)) + 1. Infer: an LM processes the input document and guesses a set of applicable terms + 2. Retrieve: a retriever relates each predicted term to the actual label space + 3. Rank: Finally, an LM is used to rerank retrieved labels +- Toolformer: Language Models Can Teach Themselves to Use Tools ([meta, 2023](https://arxiv.org/abs/2302.04761)) - model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction + - Given input, sample position and API call candidates, try them all, and filter out ones which do not reduce next-token loss + - put correct API calls into prompt, e.g. Pittsburgh is also known as `[QA(What ...?→ Steel City)]` the Steel City. + - Training + - start with few human-written examples of API use + - LLM generates more uses + - self-supervised loss determines which calls help with future-token prediction + - Atlas: Few-shot Learning with Retrieval Augmented Language Models ([meta, 2022](https://arxiv.org/abs/2208.03299)) +- Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection ([asai...hajishirzi, 2023](https://arxiv.org/abs/2310.11511)) - train an LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called reflection token +- Active RAG ([jiang...neubig, 2023](https://arxiv.org/abs/2305.06983)) - propose FLARE, which iteratively uses a prediction of the upcoming sentence to anticipate future content, which is then utilized as a query to retrieve relevant documents to regenerate the sentence if it contains low-confidence tokens +- retreival-augmented in-context learning (put retrieved info into context, or something very similar) + - REALM ([guu, ..., chang, 2020](https://arxiv.org/abs/2002.08909)) - retrieves document chunks from corpus and adds them to context, for open-domain QA + - RETRO ([deepmind, 2022](https://arxiv.org/abs/2112.04426)) - nearest neighbors to model's input are retrieved, encoded, and conditioned on with chunked cross-attention + - Decomposed prompting ([khot et al., 2022](https://arxiv.org/pdf/2210.02406.pdf)) - decompose tasks via prompting which are delegated to a shared library of prompting-based LLMs dedicated to these sub-tasks + - LLM-Augmenter ([peng, galley...gao, 2023](https://arxiv.org/abs/2302.12813)) - (1) consolidates evidence from external knowledge for the LLM to generate responses grounded in evidence, and (2) revising LLM’s (candidate) responses using automated feedback + - Knowledgeable Prompt-tuning ([Hu et al. 
2021](https://arxiv.org/abs/2108.02035)) - add knowledge-base info into the prompt search +- memorizing transformers ([wu...szegedy, 2022](https://arxiv.org/abs/2203.08913)) - knn-based learned indexing + retrieval at training time + - at test time, you just need to index the entire context and the model will be able to use it + - kNN Prompting: Learning Beyond the Context with Nearest Neighbor Inference ([xu...zhang, 2023](https://openreview.net/forum?id=fe2S7736sNS)) - instead of verbalizer, use nearest-neighbor + - has dbpedia results + - kNN-Prompt: Nearest Neighbor Zero-Shot Inference ([shi...zettlemoyer, 2022](https://arxiv.org/pdf/2205.13792.pdf)) +- original + - ACT-1: Transformer for Actions ([2022, adept](https://www.adept.ai/act)) - transformer directly interacts with computer + - ReAct: Synergizing Reasoning and Acting in Language Models ([yao...cao, 2022](https://arxiv.org/abs/2210.03629)) - use LLMs to generate reasoning traces + task-specific actions in interleaved manner + - RLPG ([shrivastava, larochelle, & tarlow, 2022](https://arxiv.org/abs/2206.12839)) - for code-completion, retrieves functions from a repo + - knowledge base triplets + - Relational Memory-Augmented Language Models ([liu, yogatama, & blunsom, 2022](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00476/110997/Relational-Memory-Augmented-Language-Models)) - integrate knowledge base triplets with LLM + - DRAGON: Deep Bidirectional Language-Knowledge Graph Pretraining ([yasanaga, ..., manning, liang, leskovec, 2022](https://arxiv.org/abs/2210.09338)) + +- webgpt ([nakano, ..., schulman, 2022, OpenAI](https://arxiv.org/abs/2112.09332)) - allows google search to add world info + - Internet-augmented language models ([Lazaridou et al., 2022](https://arxiv.org/pdf/2203.05115.pdf)) + - GopherCite ([menick, ..., mcaleese, 2022, Deepmind](https://arxiv.org/abs/2203.11147)) - generate answers + link/relevant snippet when making predictions (trained with RL from human preferences ) + - LaMDA ([thoppilan, ..., quoc le, 2022, google](https://arxiv.org/abs/2201.08239)) - allows google search to add world info (in a dialog model) + - this was the model that sparked the controversy about consciousness 🤔 + - A Neural Corpus Indexer for Document Retrieval ([wang...yang, 2022](https://arxiv.org/abs/2206.02743)) - train model to directly spit out document IDs given queries + ## adaptation / transfer *These are transformer-specific. For more general notes, see [📌 transfer learning](https://csinva.io/notes/research_ovws/ovw_transfer_learning.html) or [📌 uncertainty](https://csinva.io/notes/research_ovws/ovw_transfer_learning.html).* Most of these approaches can be combined with metalearning. @@ -531,7 +528,7 @@ Model merging (some of these are non-transformer papers) = combine different mod - standard methods (see [mergekit package](https://github.com/arcee-ai/mergekit)) 1. linear averaging, e.g. model soups ([wortsman...schmidt, 2021](https://proceedings.mlr.press/v162/wortsman22a.html)) 2. spherical linear interpolation - interpolate angle but keep norm constant - 3. TIES: Resolving Interference When Merging Models ([yadav...raffel, bansal, 2023](https://arxiv.org/abs/2306.01708)) - + 3. TIES: Resolving Interference When Merging Models ([yadav...raffel, bansal, 2023](https://arxiv.org/abs/2306.01708)) 1. only keep top-k% most significant changes in weights 2. vote on signs of parameters 4. 
DARE ([yu...li 2023](https://arxiv.org/abs/2311.03099)) @@ -574,14 +571,16 @@ Model merging (some of these are non-transformer papers) = combine different mod ## editing - - Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs ([zhang, singh, liu, liu, yu, gao, zhao, 2023](https://arxiv.org/abs/2311.02262)) - upweight attention scores at specific positions to improve LLM controllability - - Editing Large Language Models: Problems, Methods, and Opportunities ([yao, ..., zhang, 2023](https://arxiv.org/pdf/2305.13172.pdf)) +Editing is generally very similar to just adaptation/finetuning. One distinction is that it tends to try to keep changes localized, in an effort not to affect performance for most of the model. + +- Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs ([zhang, singh, liu, liu, yu, gao, zhao, 2023](https://arxiv.org/abs/2311.02262)) - upweight attention scores at specific positions to improve LLM controllability +- Editing Large Language Models: Problems, Methods, and Opportunities ([yao, ..., zhang, 2023](https://arxiv.org/pdf/2305.13172.pdf)) - model-editing = data-efficient alterations to a model - - memory-based +- memory-based - SERAC: Memory-Based Model Editing at Scale ([mitchell...manning, finn, 2022](https://proceedings.mlr.press/v162/mitchell22a/mitchell22a.pdf)) - keep track of list of edits in external memory and use them as appropriate context at test time (don't finetune the model) - T-Patcher (Huang et al., 2023) and CaliNET (Dong et al., 2022) introduce extra trainable parameters into the feed- forward module of PLMs - - weight updates +- weight updates - Knowledge Neurons in Pretrained Transformers ([dai et al. 2021](https://arxiv.org/abs/2104.08696)) - integrated gradients wrt to each neuron in BERT, then selectively udpate these neurons - ROME: Locating and Editing Factual Associations in GPT ([meng, bau et al. 
2022](https://arxiv.org/abs/2202.05262) ) - *localize factual associations* - causal intervention for identifying neuron activations that are decisive in a model’s factual predictions @@ -591,12 +590,13 @@ Model merging (some of these are non-transformer papers) = combine different mod - MEMIT: Mass Editing Memory in a Transformer ([meng..., bau, 2022](https://memit.baulab.info/)) - Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adapters ([hartvigsen, ..., palangi, ..., ghassemi, 2023](https://arxiv.org/abs/2211.11031v3)) - Flexible Model Interpretability through Natural Language Model Editing ([d'oosterlinck, ..., potts, 2023](https://arxiv.org/abs/2311.10905)) + - Model Editing with Canonical Examples ([hewitt, ..., liang, manning, 2024](https://arxiv.org/abs/2402.06155)) - meta-learning - KnowledgeEditor: Editing Factual Knowledge in Language Models ([de cao, aziz, & titov, 2021](https://arxiv.org/pdf/2104.08164.pdf)) - train a network that takes in input, output, edit and predicts a weight update to the model - MEND: Fast model editing at scale ([mitchell...finn, manning, 2022](https://arxiv.org/abs/2110.11309)) - a collection of small auxiliary editing networks that use a single desired input-output pair to edit a pre-trained model - MEND learns to transform the gradient obtained by standard fine-tuning, using a low-rank decomposition of the gradient - - REMEDI ([hernandez, li, & andreas, 2023](https://arxiv.org/pdf/2304.00740.pdf)) and related activation engineering +- REMEDI ([hernandez, li, & andreas, 2023](https://arxiv.org/pdf/2304.00740.pdf)) and related activation engineering - get "edit vectors" by obtaining embeddings when passing attributes through LLM - perform edit by by adding linear transformation of edit vector to prompt embedding - then, perform generation with latent embedding @@ -609,12 +609,19 @@ Model merging (some of these are non-transformer papers) = combine different mod - Extracting Latent Steering Vectors from Pretrained Language Models ([subramani, ..., peters, 2022](https://arxiv.org/abs/2205.05124)) - find latent vectors via optimization that cause an LLM to output a particular sequence - then, use these vectors to do things like transfer to new tasks / compute textual similarity - Function Vectors in Large Language Models ([todd...wallace, bau, 2023](https://arxiv.org/pdf/2310.15213.pdf)) - - - PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions ([chen...sameer singh...kelvin guu, 2023](https://drive.google.com/file/d/1CXSUii4w8Y2uj-zLm8zRl63SYh45FaZL/view)) - - new datasets + +- PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions ([chen...sameer singh...kelvin guu, 2023](https://drive.google.com/file/d/1CXSUii4w8Y2uj-zLm8zRl63SYh45FaZL/view)) +- new datasets - MQUAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ([zhong...manning, potts, chen, 2023](https://www.cs.princeton.edu/~zzhong/papers/MQuAKE.pdf)) - introduces benchmark MQUAKE + method MeLLo, which stores edited facts externally while prompting the language model iteratively to generate answers that are consistent with the edited facts - [COUNTERFACT+ benchmark](https://arxiv.org/pdf/2305.17553.pdf) - checks that edits don’t affect existing info - [ALMANACS](https://arxiv.org/abs/2312.12747): A Simulatability Benchmark for Language Model Explainability +- model unlearning approaches (see review Rethinking Machine Unlearning for Large Language Models, [liu et al. 
2024](https://arxiv.org/abs/2402.08787)) + - gradient ascent - worsen performance on set of examples to forget + - gradient descent - improve performance on examples labeled with hidden info, e.g. response "I don't know" + - localization-informed unlearning, e.g. ROME + - influence function-based methods + - prompt-based (e.g. only change prompt rather than model parameters) + ## direct weight inspection @@ -715,6 +722,7 @@ Model merging (some of these are non-transformer papers) = combine different mod - Codebook Features: Sparse and Discrete Interpretability for Neural Networks ([tamkin, taufeeque, & goodman, 2023](https://arxiv.org/abs/2310.17230)) - Patchscope ([ghandeharioun...geva, 2023](https://arxiv.org/abs/2401.06102)) - decode LLM's representation of a token by asking another copy of it to decode from that same representation - Program synthesis via mechanistic interpretability ([michaud...tegmark](https://arxiv.org/abs/2402.05110)) - condense RNN on simple algorithmic tasks into code +- Linear Representations of Sentiment in Large Language Models ([tigges...nanda, 2023](https://arxiv.org/abs/2310.15154)) - sentiment is distributed across tokens (not just at sentiment-laden words) ## debugging / interpretation @@ -1016,6 +1024,8 @@ mixture of experts models have become popular because of the need for (1) fast s - What Algorithms can Transformers Learn? A Study in Length Generalization ([zhou...bengio, nakkiran, 2023](https://arxiv.org/abs/2310.16028)) - Transformers tend to length generalize on a task if the task can be solved by a short RASP program which works for all input lengthsr - Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions ([bhattamishra...varun kanade, 2023](https://arxiv.org/abs/2310.03016)) - on boolean functions, transformers can learn to match optimal aglorithms for simple tasks but not on complex tasks - Transformers can learn to implement two distinct algorithms to solve a single task, and can adaptively select the more sample-efficient algorithm depending on the sequence of in-context examples + - Limits of Transformer Language Models on Learning Algorithmic Compositions ([thomm...scholkopf, rahimi, 2024](https://arxiv.org/pdf/2402.05785.pdf)) + - Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning ([li...papailiopoulos, oymak, 2023](https://openreview.net/forum?id=xEhKwsqxMa)) - CoT helps LLMs learn MLP compositional functions in-context - Learning a (sparse) linear model - The contextual lasso: Sparse linear models via deep neural networks ([thompson, …, kohn, 2023](https://arxiv.org/pdf/2302.00878.pdf)) - very rough results... 
- [Breaking the Paradox of Explainable Deep Learning](https://arxiv.org/abs/2305.13072) @@ -1129,12 +1139,15 @@ mixture of experts models have become popular because of the need for (1) fast s - neurips 2023 [tabular workshop](https://table-representation-learning.github.io) and [review](https://arxiv.org/abs/2402.05121) from feb 4 2024 - value string methods - directly treating numerical values as strings and finetune GPT on them (everything is represented as text) + - LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks ([dinh...lee, 2022](https://arxiv.org/abs/2206.06565)) - GreaT ([Borisov et al., 2022](https://openreview.net/forum?id=cEygmQNOeI)) - augmenting a sample with copies of different feature permutations - TapTap ([Zhang et al., 2023](https://arxiv.org/abs/2305.09696)) - Table-GPT ([li...chaudhuri, 2023](https://arxiv.org/pdf/2310.09263.pdf)) - TabFMs: Towards Foundation Models for Learning on Tabular Data ([zhang...bian, 2023](https://arxiv.org/abs/2310.07338)) - unified text - TableLlama: Towards Open Large Generalist Models for Tables ([zhang...sun, 2023](https://arxiv.org/pdf/2311.09206.pdf)) + - OmniPred: Language Models as Universal Regressors ([song...chen, 2024](https://arxiv.org/abs/2402.14547)) - metalearn on huge number of regression problems from Google Vizier + - do not use text tokens - TabDDPM: Modelling Tabular Data with Diffusion Models ([kotelnikov...babenko 2022](https://arxiv.org/abs/2209.15421)) - main eval: downstream ML model performance @@ -1167,7 +1180,7 @@ mixture of experts models have become popular because of the need for (1) fast s - Language models are weak learners ([manikandan, jian, & kolter, 2023](https://arxiv.org/abs/2306.14101)) - use prompted LLMs as weak learners in boosting algorithm for tabular data - TabRet: Pre-training Transformer-based Tabular Models for Unseen Columns ([onishi...hayashi, 2023](https://arxiv.org/abs/2303.15747)) - AnyPredict: A Universal Tabular Prediction System Based on Large Language Models https://openreview.net/forum?id=icuV4s8f2c - converting tabular data into machine-understandable prompts and fine-tuning LLMs to perform accurate predictions - + - interpretability - InterpreTabNet: Enhancing Interpretability of Tabular Data Using Deep Generative Models and Large Language Model ([si...krishnan, 2023](https://openreview.net/pdf?id=kzR5Cj5blw)) - make attention sparse and describe it with GPT4 diff --git a/assets/css/chandan_tables.css b/assets/css/chandan_tables.css index 1f07d22e..4db066f9 100755 --- a/assets/css/chandan_tables.css +++ b/assets/css/chandan_tables.css @@ -22,18 +22,24 @@ input { select { - background: #000; + background: #a4a4a4; border:1px; } .dataTables_wrapper .dataTables_paginate .paginate_button{ background-color: #111; + color: white; } thead, th {text-align: center;} .dataTables_info{ - color: #ffffff; + color: #e5e5e5 !important; +} + +.dataTables_wrapper .dataTables_paginate .paginate_button { + background-color: #2c2c2c !important; + color: #ffffff !important; } label { diff --git a/assets/css/font-awesome/css/font-awesome.css b/assets/css/font-awesome/css/font-awesome.css deleted file mode 100755 index 4040b3cf..00000000 --- a/assets/css/font-awesome/css/font-awesome.css +++ /dev/null @@ -1,1672 +0,0 @@ -/*! 
- * Font Awesome 4.2.0 by @davegandy - http://fontawesome.io - @fontawesome - * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) - */ -/* FONT PATH - * -------------------------- */ -@font-face { - font-family: 'FontAwesome'; - src: url('../fonts/fontawesome-webfont.eot?v=4.2.0'); - src: url('../fonts/fontawesome-webfont.eot?#iefix&v=4.2.0') format('embedded-opentype'), url('../fonts/fontawesome-webfont.woff?v=4.2.0') format('woff'), url('../fonts/fontawesome-webfont.ttf?v=4.2.0') format('truetype'), url('../fonts/fontawesome-webfont.svg?v=4.2.0#fontawesomeregular') format('svg'); - font-weight: normal; - font-style: normal; -} -.fa { - display: inline-block; - font: normal normal normal 14px/1 FontAwesome; - font-size: inherit; - text-rendering: auto; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; -} -/* makes the font 33% larger relative to the icon container */ -.fa-lg { - font-size: 1.33333333em; - line-height: 0.75em; - vertical-align: -15%; -} -.fa-2x { - font-size: 2em; -} -.fa-3x { - font-size: 3em; -} -.fa-4x { - font-size: 4em; -} -.fa-5x { - font-size: 5em; -} -.fa-fw { - width: 1.28571429em; - text-align: center; -} -.fa-ul { - padding-left: 0; - margin-left: 2.14285714em; - list-style-type: none; -} -.fa-ul > li { - position: relative; -} -.fa-li { - position: absolute; - left: -2.14285714em; - width: 2.14285714em; - top: 0.14285714em; - text-align: center; -} -.fa-li.fa-lg { - left: -1.85714286em; -} -.fa-border { - padding: .2em .25em .15em; - border: solid 0.08em #eeeeee; - border-radius: .1em; -} -.pull-right { - float: right; -} -.pull-left { - float: left; -} -.fa.pull-left { - margin-right: .3em; -} -.fa.pull-right { - margin-left: .3em; -} -.fa-spin { - -webkit-animation: fa-spin 2s infinite linear; - animation: fa-spin 2s infinite linear; -} -@-webkit-keyframes fa-spin { - 0% { - -webkit-transform: rotate(0deg); - transform: rotate(0deg); - } - 100% { - -webkit-transform: rotate(359deg); - transform: rotate(359deg); - } -} -@keyframes fa-spin { - 0% { - -webkit-transform: rotate(0deg); - transform: rotate(0deg); - } - 100% { - -webkit-transform: rotate(359deg); - transform: rotate(359deg); - } -} -.fa-rotate-90 { - filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=1); - -webkit-transform: rotate(90deg); - -ms-transform: rotate(90deg); - transform: rotate(90deg); -} -.fa-rotate-180 { - filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=2); - -webkit-transform: rotate(180deg); - -ms-transform: rotate(180deg); - transform: rotate(180deg); -} -.fa-rotate-270 { - filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=3); - -webkit-transform: rotate(270deg); - -ms-transform: rotate(270deg); - transform: rotate(270deg); -} -.fa-flip-horizontal { - filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1); - -webkit-transform: scale(-1, 1); - -ms-transform: scale(-1, 1); - transform: scale(-1, 1); -} -.fa-flip-vertical { - filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1); - -webkit-transform: scale(1, -1); - -ms-transform: scale(1, -1); - transform: scale(1, -1); -} -:root .fa-rotate-90, -:root .fa-rotate-180, -:root .fa-rotate-270, -:root .fa-flip-horizontal, -:root .fa-flip-vertical { - filter: none; -} -.fa-stack { - position: relative; - display: inline-block; - width: 2em; - height: 2em; - line-height: 2em; - vertical-align: middle; -} -.fa-stack-1x, -.fa-stack-2x { - position: absolute; - left: 0; - width: 100%; - text-align: center; -} 
-.fa-stack-1x { - line-height: inherit; -} -.fa-stack-2x { - font-size: 2em; -} -.fa-inverse { - color: #ffffff; -} -/* Font Awesome uses the Unicode Private Use Area (PUA) to ensure screen - readers do not read off random characters that represent icons */ -.fa-glass:before { - content: "\f000"; -} -.fa-music:before { - content: "\f001"; -} -.fa-search:before { - content: "\f002"; -} -.fa-envelope-o:before { - content: "\f003"; -} -.fa-heart:before { - content: "\f004"; -} -.fa-star:before { - content: "\f005"; -} -.fa-star-o:before { - content: "\f006"; -} -.fa-user:before { - content: "\f007"; -} -.fa-film:before { - content: "\f008"; -} -.fa-th-large:before { - content: "\f009"; -} -.fa-th:before { - content: "\f00a"; -} -.fa-th-list:before { - content: "\f00b"; -} -.fa-check:before { - content: "\f00c"; -} -.fa-remove:before, -.fa-close:before, -.fa-times:before { - content: "\f00d"; -} -.fa-search-plus:before { - content: "\f00e"; -} -.fa-search-minus:before { - content: "\f010"; -} -.fa-power-off:before { - content: "\f011"; -} -.fa-signal:before { - content: "\f012"; -} -.fa-gear:before, -.fa-cog:before { - content: "\f013"; -} -.fa-trash-o:before { - content: "\f014"; -} -.fa-home:before { - content: "\f015"; -} -.fa-file-o:before { - content: "\f016"; -} -.fa-clock-o:before { - content: "\f017"; -} -.fa-road:before { - content: "\f018"; -} -.fa-download:before { - content: "\f019"; -} -.fa-arrow-circle-o-down:before { - content: "\f01a"; -} -.fa-arrow-circle-o-up:before { - content: "\f01b"; -} -.fa-inbox:before { - content: "\f01c"; -} -.fa-play-circle-o:before { - content: "\f01d"; -} -.fa-rotate-right:before, -.fa-repeat:before { - content: "\f01e"; -} -.fa-refresh:before { - content: "\f021"; -} -.fa-list-alt:before { - content: "\f022"; -} -.fa-lock:before { - content: "\f023"; -} -.fa-flag:before { - content: "\f024"; -} -.fa-headphones:before { - content: "\f025"; -} -.fa-volume-off:before { - content: "\f026"; -} -.fa-volume-down:before { - content: "\f027"; -} -.fa-volume-up:before { - content: "\f028"; -} -.fa-qrcode:before { - content: "\f029"; -} -.fa-barcode:before { - content: "\f02a"; -} -.fa-tag:before { - content: "\f02b"; -} -.fa-tags:before { - content: "\f02c"; -} -.fa-book:before { - content: "\f02d"; -} -.fa-bookmark:before { - content: "\f02e"; -} -.fa-print:before { - content: "\f02f"; -} -.fa-camera:before { - content: "\f030"; -} -.fa-font:before { - content: "\f031"; -} -.fa-bold:before { - content: "\f032"; -} -.fa-italic:before { - content: "\f033"; -} -.fa-text-height:before { - content: "\f034"; -} -.fa-text-width:before { - content: "\f035"; -} -.fa-align-left:before { - content: "\f036"; -} -.fa-align-center:before { - content: "\f037"; -} -.fa-align-right:before { - content: "\f038"; -} -.fa-align-justify:before { - content: "\f039"; -} -.fa-list:before { - content: "\f03a"; -} -.fa-dedent:before, -.fa-outdent:before { - content: "\f03b"; -} -.fa-indent:before { - content: "\f03c"; -} -.fa-video-camera:before { - content: "\f03d"; -} -.fa-photo:before, -.fa-image:before, -.fa-picture-o:before { - content: "\f03e"; -} -.fa-pencil:before { - content: "\f040"; -} -.fa-map-marker:before { - content: "\f041"; -} -.fa-adjust:before { - content: "\f042"; -} -.fa-tint:before { - content: "\f043"; -} -.fa-edit:before, -.fa-pencil-square-o:before { - content: "\f044"; -} -.fa-share-square-o:before { - content: "\f045"; -} -.fa-check-square-o:before { - content: "\f046"; -} -.fa-arrows:before { - content: "\f047"; -} -.fa-step-backward:before { - content: 
"\f048"; -} -.fa-fast-backward:before { - content: "\f049"; -} -.fa-backward:before { - content: "\f04a"; -} -.fa-play:before { - content: "\f04b"; -} -.fa-pause:before { - content: "\f04c"; -} -.fa-stop:before { - content: "\f04d"; -} -.fa-forward:before { - content: "\f04e"; -} -.fa-fast-forward:before { - content: "\f050"; -} -.fa-step-forward:before { - content: "\f051"; -} -.fa-eject:before { - content: "\f052"; -} -.fa-chevron-left:before { - content: "\f053"; -} -.fa-chevron-right:before { - content: "\f054"; -} -.fa-plus-circle:before { - content: "\f055"; -} -.fa-minus-circle:before { - content: "\f056"; -} -.fa-times-circle:before { - content: "\f057"; -} -.fa-check-circle:before { - content: "\f058"; -} -.fa-question-circle:before { - content: "\f059"; -} -.fa-info-circle:before { - content: "\f05a"; -} -.fa-crosshairs:before { - content: "\f05b"; -} -.fa-times-circle-o:before { - content: "\f05c"; -} -.fa-check-circle-o:before { - content: "\f05d"; -} -.fa-ban:before { - content: "\f05e"; -} -.fa-arrow-left:before { - content: "\f060"; -} -.fa-arrow-right:before { - content: "\f061"; -} -.fa-arrow-up:before { - content: "\f062"; -} -.fa-arrow-down:before { - content: "\f063"; -} -.fa-mail-forward:before, -.fa-share:before { - content: "\f064"; -} -.fa-expand:before { - content: "\f065"; -} -.fa-compress:before { - content: "\f066"; -} -.fa-plus:before { - content: "\f067"; -} -.fa-minus:before { - content: "\f068"; -} -.fa-asterisk:before { - content: "\f069"; -} -.fa-exclamation-circle:before { - content: "\f06a"; -} -.fa-gift:before { - content: "\f06b"; -} -.fa-leaf:before { - content: "\f06c"; -} -.fa-fire:before { - content: "\f06d"; -} -.fa-eye:before { - content: "\f06e"; -} -.fa-eye-slash:before { - content: "\f070"; -} -.fa-warning:before, -.fa-exclamation-triangle:before { - content: "\f071"; -} -.fa-plane:before { - content: "\f072"; -} -.fa-calendar:before { - content: "\f073"; -} -.fa-random:before { - content: "\f074"; -} -.fa-comment:before { - content: "\f075"; -} -.fa-magnet:before { - content: "\f076"; -} -.fa-chevron-up:before { - content: "\f077"; -} -.fa-chevron-down:before { - content: "\f078"; -} -.fa-retweet:before { - content: "\f079"; -} -.fa-shopping-cart:before { - content: "\f07a"; -} -.fa-folder:before { - content: "\f07b"; -} -.fa-folder-open:before { - content: "\f07c"; -} -.fa-arrows-v:before { - content: "\f07d"; -} -.fa-arrows-h:before { - content: "\f07e"; -} -.fa-bar-chart-o:before, -.fa-bar-chart:before { - content: "\f080"; -} -.fa-twitter-square:before { - content: "\f081"; -} -.fa-facebook-square:before { - content: "\f082"; -} -.fa-camera-retro:before { - content: "\f083"; -} -.fa-key:before { - content: "\f084"; -} -.fa-gears:before, -.fa-cogs:before { - content: "\f085"; -} -.fa-comments:before { - content: "\f086"; -} -.fa-thumbs-o-up:before { - content: "\f087"; -} -.fa-thumbs-o-down:before { - content: "\f088"; -} -.fa-star-half:before { - content: "\f089"; -} -.fa-heart-o:before { - content: "\f08a"; -} -.fa-sign-out:before { - content: "\f08b"; -} -.fa-linkedin-square:before { - content: "\f08c"; -} -.fa-thumb-tack:before { - content: "\f08d"; -} -.fa-external-link:before { - content: "\f08e"; -} -.fa-sign-in:before { - content: "\f090"; -} -.fa-trophy:before { - content: "\f091"; -} -.fa-github-square:before { - content: "\f092"; -} -.fa-upload:before { - content: "\f093"; -} -.fa-lemon-o:before { - content: "\f094"; -} -.fa-phone:before { - content: "\f095"; -} -.fa-square-o:before { - content: "\f096"; -} 
-.fa-bookmark-o:before { - content: "\f097"; -} -.fa-phone-square:before { - content: "\f098"; -} -.fa-twitter:before { - content: "\f099"; -} -.fa-facebook:before { - content: "\f09a"; -} -.fa-github:before { - content: "\f09b"; -} -.fa-unlock:before { - content: "\f09c"; -} -.fa-credit-card:before { - content: "\f09d"; -} -.fa-rss:before { - content: "\f09e"; -} -.fa-hdd-o:before { - content: "\f0a0"; -} -.fa-bullhorn:before { - content: "\f0a1"; -} -.fa-bell:before { - content: "\f0f3"; -} -.fa-certificate:before { - content: "\f0a3"; -} -.fa-hand-o-right:before { - content: "\f0a4"; -} -.fa-hand-o-left:before { - content: "\f0a5"; -} -.fa-hand-o-up:before { - content: "\f0a6"; -} -.fa-hand-o-down:before { - content: "\f0a7"; -} -.fa-arrow-circle-left:before { - content: "\f0a8"; -} -.fa-arrow-circle-right:before { - content: "\f0a9"; -} -.fa-arrow-circle-up:before { - content: "\f0aa"; -} -.fa-arrow-circle-down:before { - content: "\f0ab"; -} -.fa-globe:before { - content: "\f0ac"; -} -.fa-wrench:before { - content: "\f0ad"; -} -.fa-tasks:before { - content: "\f0ae"; -} -.fa-filter:before { - content: "\f0b0"; -} -.fa-briefcase:before { - content: "\f0b1"; -} -.fa-arrows-alt:before { - content: "\f0b2"; -} -.fa-group:before, -.fa-users:before { - content: "\f0c0"; -} -.fa-chain:before, -.fa-link:before { - content: "\f0c1"; -} -.fa-cloud:before { - content: "\f0c2"; -} -.fa-flask:before { - content: "\f0c3"; -} -.fa-cut:before, -.fa-scissors:before { - content: "\f0c4"; -} -.fa-copy:before, -.fa-files-o:before { - content: "\f0c5"; -} -.fa-paperclip:before { - content: "\f0c6"; -} -.fa-save:before, -.fa-floppy-o:before { - content: "\f0c7"; -} -.fa-square:before { - content: "\f0c8"; -} -.fa-navicon:before, -.fa-reorder:before, -.fa-bars:before { - content: "\f0c9"; -} -.fa-list-ul:before { - content: "\f0ca"; -} -.fa-list-ol:before { - content: "\f0cb"; -} -.fa-strikethrough:before { - content: "\f0cc"; -} -.fa-underline:before { - content: "\f0cd"; -} -.fa-table:before { - content: "\f0ce"; -} -.fa-magic:before { - content: "\f0d0"; -} -.fa-truck:before { - content: "\f0d1"; -} -.fa-pinterest:before { - content: "\f0d2"; -} -.fa-pinterest-square:before { - content: "\f0d3"; -} -.fa-google-plus-square:before { - content: "\f0d4"; -} -.fa-google-plus:before { - content: "\f0d5"; -} -.fa-money:before { - content: "\f0d6"; -} -.fa-caret-down:before { - content: "\f0d7"; -} -.fa-caret-up:before { - content: "\f0d8"; -} -.fa-caret-left:before { - content: "\f0d9"; -} -.fa-caret-right:before { - content: "\f0da"; -} -.fa-columns:before { - content: "\f0db"; -} -.fa-unsorted:before, -.fa-sort:before { - content: "\f0dc"; -} -.fa-sort-down:before, -.fa-sort-desc:before { - content: "\f0dd"; -} -.fa-sort-up:before, -.fa-sort-asc:before { - content: "\f0de"; -} -.fa-envelope:before { - content: "\f0e0"; -} -.fa-linkedin:before { - content: "\f0e1"; -} -.fa-rotate-left:before, -.fa-undo:before { - content: "\f0e2"; -} -.fa-legal:before, -.fa-gavel:before { - content: "\f0e3"; -} -.fa-dashboard:before, -.fa-tachometer:before { - content: "\f0e4"; -} -.fa-comment-o:before { - content: "\f0e5"; -} -.fa-comments-o:before { - content: "\f0e6"; -} -.fa-flash:before, -.fa-bolt:before { - content: "\f0e7"; -} -.fa-sitemap:before { - content: "\f0e8"; -} -.fa-umbrella:before { - content: "\f0e9"; -} -.fa-paste:before, -.fa-clipboard:before { - content: "\f0ea"; -} -.fa-lightbulb-o:before { - content: "\f0eb"; -} -.fa-exchange:before { - content: "\f0ec"; -} -.fa-cloud-download:before { - content: 
"\f0ed"; -} -.fa-cloud-upload:before { - content: "\f0ee"; -} -.fa-user-md:before { - content: "\f0f0"; -} -.fa-stethoscope:before { - content: "\f0f1"; -} -.fa-suitcase:before { - content: "\f0f2"; -} -.fa-bell-o:before { - content: "\f0a2"; -} -.fa-coffee:before { - content: "\f0f4"; -} -.fa-cutlery:before { - content: "\f0f5"; -} -.fa-file-text-o:before { - content: "\f0f6"; -} -.fa-building-o:before { - content: "\f0f7"; -} -.fa-hospital-o:before { - content: "\f0f8"; -} -.fa-ambulance:before { - content: "\f0f9"; -} -.fa-medkit:before { - content: "\f0fa"; -} -.fa-fighter-jet:before { - content: "\f0fb"; -} -.fa-beer:before { - content: "\f0fc"; -} -.fa-h-square:before { - content: "\f0fd"; -} -.fa-plus-square:before { - content: "\f0fe"; -} -.fa-angle-double-left:before { - content: "\f100"; -} -.fa-angle-double-right:before { - content: "\f101"; -} -.fa-angle-double-up:before { - content: "\f102"; -} -.fa-angle-double-down:before { - content: "\f103"; -} -.fa-angle-left:before { - content: "\f104"; -} -.fa-angle-right:before { - content: "\f105"; -} -.fa-angle-up:before { - content: "\f106"; -} -.fa-angle-down:before { - content: "\f107"; -} -.fa-desktop:before { - content: "\f108"; -} -.fa-laptop:before { - content: "\f109"; -} -.fa-tablet:before { - content: "\f10a"; -} -.fa-mobile-phone:before, -.fa-mobile:before { - content: "\f10b"; -} -.fa-circle-o:before { - content: "\f10c"; -} -.fa-quote-left:before { - content: "\f10d"; -} -.fa-quote-right:before { - content: "\f10e"; -} -.fa-spinner:before { - content: "\f110"; -} -.fa-circle:before { - content: "\f111"; -} -.fa-mail-reply:before, -.fa-reply:before { - content: "\f112"; -} -.fa-github-alt:before { - content: "\f113"; -} -.fa-folder-o:before { - content: "\f114"; -} -.fa-folder-open-o:before { - content: "\f115"; -} -.fa-smile-o:before { - content: "\f118"; -} -.fa-frown-o:before { - content: "\f119"; -} -.fa-meh-o:before { - content: "\f11a"; -} -.fa-gamepad:before { - content: "\f11b"; -} -.fa-keyboard-o:before { - content: "\f11c"; -} -.fa-flag-o:before { - content: "\f11d"; -} -.fa-flag-checkered:before { - content: "\f11e"; -} -.fa-terminal:before { - content: "\f120"; -} -.fa-code:before { - content: "\f121"; -} -.fa-mail-reply-all:before, -.fa-reply-all:before { - content: "\f122"; -} -.fa-star-half-empty:before, -.fa-star-half-full:before, -.fa-star-half-o:before { - content: "\f123"; -} -.fa-location-arrow:before { - content: "\f124"; -} -.fa-crop:before { - content: "\f125"; -} -.fa-code-fork:before { - content: "\f126"; -} -.fa-unlink:before, -.fa-chain-broken:before { - content: "\f127"; -} -.fa-question:before { - content: "\f128"; -} -.fa-info:before { - content: "\f129"; -} -.fa-exclamation:before { - content: "\f12a"; -} -.fa-superscript:before { - content: "\f12b"; -} -.fa-subscript:before { - content: "\f12c"; -} -.fa-eraser:before { - content: "\f12d"; -} -.fa-puzzle-piece:before { - content: "\f12e"; -} -.fa-microphone:before { - content: "\f130"; -} -.fa-microphone-slash:before { - content: "\f131"; -} -.fa-shield:before { - content: "\f132"; -} -.fa-calendar-o:before { - content: "\f133"; -} -.fa-fire-extinguisher:before { - content: "\f134"; -} -.fa-rocket:before { - content: "\f135"; -} -.fa-maxcdn:before { - content: "\f136"; -} -.fa-chevron-circle-left:before { - content: "\f137"; -} -.fa-chevron-circle-right:before { - content: "\f138"; -} -.fa-chevron-circle-up:before { - content: "\f139"; -} -.fa-chevron-circle-down:before { - content: "\f13a"; -} -.fa-html5:before { - content: "\f13b"; -} 
-.fa-css3:before { - content: "\f13c"; -} -.fa-anchor:before { - content: "\f13d"; -} -.fa-unlock-alt:before { - content: "\f13e"; -} -.fa-bullseye:before { - content: "\f140"; -} -.fa-ellipsis-h:before { - content: "\f141"; -} -.fa-ellipsis-v:before { - content: "\f142"; -} -.fa-rss-square:before { - content: "\f143"; -} -.fa-play-circle:before { - content: "\f144"; -} -.fa-ticket:before { - content: "\f145"; -} -.fa-minus-square:before { - content: "\f146"; -} -.fa-minus-square-o:before { - content: "\f147"; -} -.fa-level-up:before { - content: "\f148"; -} -.fa-level-down:before { - content: "\f149"; -} -.fa-check-square:before { - content: "\f14a"; -} -.fa-pencil-square:before { - content: "\f14b"; -} -.fa-external-link-square:before { - content: "\f14c"; -} -.fa-share-square:before { - content: "\f14d"; -} -.fa-compass:before { - content: "\f14e"; -} -.fa-toggle-down:before, -.fa-caret-square-o-down:before { - content: "\f150"; -} -.fa-toggle-up:before, -.fa-caret-square-o-up:before { - content: "\f151"; -} -.fa-toggle-right:before, -.fa-caret-square-o-right:before { - content: "\f152"; -} -.fa-euro:before, -.fa-eur:before { - content: "\f153"; -} -.fa-gbp:before { - content: "\f154"; -} -.fa-dollar:before, -.fa-usd:before { - content: "\f155"; -} -.fa-rupee:before, -.fa-inr:before { - content: "\f156"; -} -.fa-cny:before, -.fa-rmb:before, -.fa-yen:before, -.fa-jpy:before { - content: "\f157"; -} -.fa-ruble:before, -.fa-rouble:before, -.fa-rub:before { - content: "\f158"; -} -.fa-won:before, -.fa-krw:before { - content: "\f159"; -} -.fa-bitcoin:before, -.fa-btc:before { - content: "\f15a"; -} -.fa-file:before { - content: "\f15b"; -} -.fa-file-text:before { - content: "\f15c"; -} -.fa-sort-alpha-asc:before { - content: "\f15d"; -} -.fa-sort-alpha-desc:before { - content: "\f15e"; -} -.fa-sort-amount-asc:before { - content: "\f160"; -} -.fa-sort-amount-desc:before { - content: "\f161"; -} -.fa-sort-numeric-asc:before { - content: "\f162"; -} -.fa-sort-numeric-desc:before { - content: "\f163"; -} -.fa-thumbs-up:before { - content: "\f164"; -} -.fa-thumbs-down:before { - content: "\f165"; -} -.fa-youtube-square:before { - content: "\f166"; -} -.fa-youtube:before { - content: "\f167"; -} -.fa-xing:before { - content: "\f168"; -} -.fa-xing-square:before { - content: "\f169"; -} -.fa-youtube-play:before { - content: "\f16a"; -} -.fa-dropbox:before { - content: "\f16b"; -} -.fa-stack-overflow:before { - content: "\f16c"; -} -.fa-instagram:before { - content: "\f16d"; -} -.fa-flickr:before { - content: "\f16e"; -} -.fa-adn:before { - content: "\f170"; -} -.fa-bitbucket:before { - content: "\f171"; -} -.fa-bitbucket-square:before { - content: "\f172"; -} -.fa-tumblr:before { - content: "\f173"; -} -.fa-tumblr-square:before { - content: "\f174"; -} -.fa-long-arrow-down:before { - content: "\f175"; -} -.fa-long-arrow-up:before { - content: "\f176"; -} -.fa-long-arrow-left:before { - content: "\f177"; -} -.fa-long-arrow-right:before { - content: "\f178"; -} -.fa-apple:before { - content: "\f179"; -} -.fa-windows:before { - content: "\f17a"; -} -.fa-android:before { - content: "\f17b"; -} -.fa-linux:before { - content: "\f17c"; -} -.fa-dribbble:before { - content: "\f17d"; -} -.fa-skype:before { - content: "\f17e"; -} -.fa-foursquare:before { - content: "\f180"; -} -.fa-trello:before { - content: "\f181"; -} -.fa-female:before { - content: "\f182"; -} -.fa-male:before { - content: "\f183"; -} -.fa-gittip:before { - content: "\f184"; -} -.fa-sun-o:before { - content: "\f185"; -} -.fa-moon-o:before { 
- content: "\f186"; -} -.fa-archive:before { - content: "\f187"; -} -.fa-bug:before { - content: "\f188"; -} -.fa-vk:before { - content: "\f189"; -} -.fa-weibo:before { - content: "\f18a"; -} -.fa-renren:before { - content: "\f18b"; -} -.fa-pagelines:before { - content: "\f18c"; -} -.fa-stack-exchange:before { - content: "\f18d"; -} -.fa-arrow-circle-o-right:before { - content: "\f18e"; -} -.fa-arrow-circle-o-left:before { - content: "\f190"; -} -.fa-toggle-left:before, -.fa-caret-square-o-left:before { - content: "\f191"; -} -.fa-dot-circle-o:before { - content: "\f192"; -} -.fa-wheelchair:before { - content: "\f193"; -} -.fa-vimeo-square:before { - content: "\f194"; -} -.fa-turkish-lira:before, -.fa-try:before { - content: "\f195"; -} -.fa-plus-square-o:before { - content: "\f196"; -} -.fa-space-shuttle:before { - content: "\f197"; -} -.fa-slack:before { - content: "\f198"; -} -.fa-envelope-square:before { - content: "\f199"; -} -.fa-wordpress:before { - content: "\f19a"; -} -.fa-openid:before { - content: "\f19b"; -} -.fa-institution:before, -.fa-bank:before, -.fa-university:before { - content: "\f19c"; -} -.fa-mortar-board:before, -.fa-graduation-cap:before { - content: "\f19d"; -} -.fa-yahoo:before { - content: "\f19e"; -} -.fa-google:before { - content: "\f1a0"; -} -.fa-reddit:before { - content: "\f1a1"; -} -.fa-reddit-square:before { - content: "\f1a2"; -} -.fa-stumbleupon-circle:before { - content: "\f1a3"; -} -.fa-stumbleupon:before { - content: "\f1a4"; -} -.fa-delicious:before { - content: "\f1a5"; -} -.fa-digg:before { - content: "\f1a6"; -} -.fa-pied-piper:before { - content: "\f1a7"; -} -.fa-pied-piper-alt:before { - content: "\f1a8"; -} -.fa-drupal:before { - content: "\f1a9"; -} -.fa-joomla:before { - content: "\f1aa"; -} -.fa-language:before { - content: "\f1ab"; -} -.fa-fax:before { - content: "\f1ac"; -} -.fa-building:before { - content: "\f1ad"; -} -.fa-child:before { - content: "\f1ae"; -} -.fa-paw:before { - content: "\f1b0"; -} -.fa-spoon:before { - content: "\f1b1"; -} -.fa-cube:before { - content: "\f1b2"; -} -.fa-cubes:before { - content: "\f1b3"; -} -.fa-behance:before { - content: "\f1b4"; -} -.fa-behance-square:before { - content: "\f1b5"; -} -.fa-steam:before { - content: "\f1b6"; -} -.fa-steam-square:before { - content: "\f1b7"; -} -.fa-recycle:before { - content: "\f1b8"; -} -.fa-automobile:before, -.fa-car:before { - content: "\f1b9"; -} -.fa-cab:before, -.fa-taxi:before { - content: "\f1ba"; -} -.fa-tree:before { - content: "\f1bb"; -} -.fa-spotify:before { - content: "\f1bc"; -} -.fa-deviantart:before { - content: "\f1bd"; -} -.fa-soundcloud:before { - content: "\f1be"; -} -.fa-database:before { - content: "\f1c0"; -} -.fa-file-pdf-o:before { - content: "\f1c1"; -} -.fa-file-word-o:before { - content: "\f1c2"; -} -.fa-file-excel-o:before { - content: "\f1c3"; -} -.fa-file-powerpoint-o:before { - content: "\f1c4"; -} -.fa-file-photo-o:before, -.fa-file-picture-o:before, -.fa-file-image-o:before { - content: "\f1c5"; -} -.fa-file-zip-o:before, -.fa-file-archive-o:before { - content: "\f1c6"; -} -.fa-file-sound-o:before, -.fa-file-audio-o:before { - content: "\f1c7"; -} -.fa-file-movie-o:before, -.fa-file-video-o:before { - content: "\f1c8"; -} -.fa-file-code-o:before { - content: "\f1c9"; -} -.fa-vine:before { - content: "\f1ca"; -} -.fa-codepen:before { - content: "\f1cb"; -} -.fa-jsfiddle:before { - content: "\f1cc"; -} -.fa-life-bouy:before, -.fa-life-buoy:before, -.fa-life-saver:before, -.fa-support:before, -.fa-life-ring:before { - content: "\f1cd"; 
-} -.fa-circle-o-notch:before { - content: "\f1ce"; -} -.fa-ra:before, -.fa-rebel:before { - content: "\f1d0"; -} -.fa-ge:before, -.fa-empire:before { - content: "\f1d1"; -} -.fa-git-square:before { - content: "\f1d2"; -} -.fa-git:before { - content: "\f1d3"; -} -.fa-hacker-news:before { - content: "\f1d4"; -} -.fa-tencent-weibo:before { - content: "\f1d5"; -} -.fa-qq:before { - content: "\f1d6"; -} -.fa-wechat:before, -.fa-weixin:before { - content: "\f1d7"; -} -.fa-send:before, -.fa-paper-plane:before { - content: "\f1d8"; -} -.fa-send-o:before, -.fa-paper-plane-o:before { - content: "\f1d9"; -} -.fa-history:before { - content: "\f1da"; -} -.fa-circle-thin:before { - content: "\f1db"; -} -.fa-header:before { - content: "\f1dc"; -} -.fa-paragraph:before { - content: "\f1dd"; -} -.fa-sliders:before { - content: "\f1de"; -} -.fa-share-alt:before { - content: "\f1e0"; -} -.fa-share-alt-square:before { - content: "\f1e1"; -} -.fa-bomb:before { - content: "\f1e2"; -} -.fa-soccer-ball-o:before, -.fa-futbol-o:before { - content: "\f1e3"; -} -.fa-tty:before { - content: "\f1e4"; -} -.fa-binoculars:before { - content: "\f1e5"; -} -.fa-plug:before { - content: "\f1e6"; -} -.fa-slideshare:before { - content: "\f1e7"; -} -.fa-twitch:before { - content: "\f1e8"; -} -.fa-yelp:before { - content: "\f1e9"; -} -.fa-newspaper-o:before { - content: "\f1ea"; -} -.fa-wifi:before { - content: "\f1eb"; -} -.fa-calculator:before { - content: "\f1ec"; -} -.fa-paypal:before { - content: "\f1ed"; -} -.fa-google-wallet:before { - content: "\f1ee"; -} -.fa-cc-visa:before { - content: "\f1f0"; -} -.fa-cc-mastercard:before { - content: "\f1f1"; -} -.fa-cc-discover:before { - content: "\f1f2"; -} -.fa-cc-amex:before { - content: "\f1f3"; -} -.fa-cc-paypal:before { - content: "\f1f4"; -} -.fa-cc-stripe:before { - content: "\f1f5"; -} -.fa-bell-slash:before { - content: "\f1f6"; -} -.fa-bell-slash-o:before { - content: "\f1f7"; -} -.fa-trash:before { - content: "\f1f8"; -} -.fa-copyright:before { - content: "\f1f9"; -} -.fa-at:before { - content: "\f1fa"; -} -.fa-eyedropper:before { - content: "\f1fb"; -} -.fa-paint-brush:before { - content: "\f1fc"; -} -.fa-birthday-cake:before { - content: "\f1fd"; -} -.fa-area-chart:before { - content: "\f1fe"; -} -.fa-pie-chart:before { - content: "\f200"; -} -.fa-line-chart:before { - content: "\f201"; -} -.fa-lastfm:before { - content: "\f202"; -} -.fa-lastfm-square:before { - content: "\f203"; -} -.fa-toggle-off:before { - content: "\f204"; -} -.fa-toggle-on:before { - content: "\f205"; -} -.fa-bicycle:before { - content: "\f206"; -} -.fa-bus:before { - content: "\f207"; -} -.fa-ioxhost:before { - content: "\f208"; -} -.fa-angellist:before { - content: "\f209"; -} -.fa-cc:before { - content: "\f20a"; -} -.fa-shekel:before, -.fa-sheqel:before, -.fa-ils:before { - content: "\f20b"; -} -.fa-meanpath:before { - content: "\f20c"; -} diff --git a/assets/css/font-awesome/css/font-awesome.min.css b/assets/css/font-awesome/css/font-awesome.min.css deleted file mode 100755 index 8336eb5d..00000000 --- a/assets/css/font-awesome/css/font-awesome.min.css +++ /dev/null @@ -1,4 +0,0 @@ -/*! 
- * Font Awesome 4.2.0 by @davegandy - http://fontawesome.io - @fontawesome - * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) - */@font-face{font-family:'FontAwesome';src:url('../fonts/fontawesome-webfont.eot?v=4.2.0');src:url('../fonts/fontawesome-webfont.eot?#iefix&v=4.2.0') format('embedded-opentype'),url('../fonts/fontawesome-webfont.woff?v=4.2.0') format('woff'),url('../fonts/fontawesome-webfont.ttf?v=4.2.0') format('truetype'),url('../fonts/fontawesome-webfont.svg?v=4.2.0#fontawesomeregular') format('svg');font-weight:normal;font-style:normal}.fa{display:inline-block;font:normal normal normal 14px/1 FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale} .fa-lg{font-size:1.33333333em;line-height:.75em;vertical-align:-15%} .fa-2x{font-size:2em} .fa-3x{font-size:3em} .fa-4x{font-size:4em} .fa-5x{font-size:5em} .fa-fw{width:1.28571429em;text-align:center} .fa-ul{padding-left:0;margin-left:2.14285714em;list-style-type:none} .fa-ul>li{position:relative} .fa-li{position:absolute;left:-2.14285714em;width:2.14285714em;top:.14285714em;text-align:center} .fa-li.fa-lg{left:-1.85714286em} .fa-border{padding:.2em .25em .15em;border:solid .08em #eee;border-radius:.1em} .pull-right{float:right} .pull-left{float:left} .fa.pull-left{margin-right:.3em} .fa.pull-right{margin-left:.3em} .fa-spin{-webkit-animation:fa-spin 2s infinite linear;animation:fa-spin 2s infinite linear} @-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}} @keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}} .fa-rotate-90{filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=1);-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)} .fa-rotate-180{filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=2);-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)} .fa-rotate-270{filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=3);-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)} .fa-flip-horizontal{filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1);-webkit-transform:scale(-1, 1);-ms-transform:scale(-1, 1);transform:scale(-1, 1)} .fa-flip-vertical{filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1);-webkit-transform:scale(1, -1);-ms-transform:scale(1, -1);transform:scale(1, -1)} :root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270,:root .fa-flip-horizontal,:root .fa-flip-vertical{filter:none} .fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle} .fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center} .fa-stack-1x{line-height:inherit} .fa-stack-2x{font-size:2em} .fa-inverse{color:#fff} .fa-glass:before{content:"\f000"} .fa-music:before{content:"\f001"} .fa-search:before{content:"\f002"} .fa-envelope-o:before{content:"\f003"} .fa-heart:before{content:"\f004"} .fa-star:before{content:"\f005"} .fa-star-o:before{content:"\f006"} .fa-user:before{content:"\f007"} .fa-film:before{content:"\f008"} .fa-th-large:before{content:"\f009"} .fa-th:before{content:"\f00a"} .fa-th-list:before{content:"\f00b"} .fa-check:before{content:"\f00c"} .fa-remove:before,.fa-close:before,.fa-times:before{content:"\f00d"} 
.fa-search-plus:before{content:"\f00e"} .fa-search-minus:before{content:"\f010"} .fa-power-off:before{content:"\f011"} .fa-signal:before{content:"\f012"} .fa-gear:before,.fa-cog:before{content:"\f013"} .fa-trash-o:before{content:"\f014"} .fa-home:before{content:"\f015"} .fa-file-o:before{content:"\f016"} .fa-clock-o:before{content:"\f017"} .fa-road:before{content:"\f018"} .fa-download:before{content:"\f019"} .fa-arrow-circle-o-down:before{content:"\f01a"} .fa-arrow-circle-o-up:before{content:"\f01b"} .fa-inbox:before{content:"\f01c"} .fa-play-circle-o:before{content:"\f01d"} .fa-rotate-right:before,.fa-repeat:before{content:"\f01e"} .fa-refresh:before{content:"\f021"} .fa-list-alt:before{content:"\f022"} .fa-lock:before{content:"\f023"} .fa-flag:before{content:"\f024"} .fa-headphones:before{content:"\f025"} .fa-volume-off:before{content:"\f026"} .fa-volume-down:before{content:"\f027"} .fa-volume-up:before{content:"\f028"} .fa-qrcode:before{content:"\f029"} .fa-barcode:before{content:"\f02a"} .fa-tag:before{content:"\f02b"} .fa-tags:before{content:"\f02c"} .fa-book:before{content:"\f02d"} .fa-bookmark:before{content:"\f02e"} .fa-print:before{content:"\f02f"} .fa-camera:before{content:"\f030"} .fa-font:before{content:"\f031"} .fa-bold:before{content:"\f032"} .fa-italic:before{content:"\f033"} .fa-text-height:before{content:"\f034"} .fa-text-width:before{content:"\f035"} .fa-align-left:before{content:"\f036"} .fa-align-center:before{content:"\f037"} .fa-align-right:before{content:"\f038"} .fa-align-justify:before{content:"\f039"} .fa-list:before{content:"\f03a"} .fa-dedent:before,.fa-outdent:before{content:"\f03b"} .fa-indent:before{content:"\f03c"} .fa-video-camera:before{content:"\f03d"} .fa-photo:before,.fa-image:before,.fa-picture-o:before{content:"\f03e"} .fa-pencil:before{content:"\f040"} .fa-map-marker:before{content:"\f041"} .fa-adjust:before{content:"\f042"} .fa-tint:before{content:"\f043"} .fa-edit:before,.fa-pencil-square-o:before{content:"\f044"} .fa-share-square-o:before{content:"\f045"} .fa-check-square-o:before{content:"\f046"} .fa-arrows:before{content:"\f047"} .fa-step-backward:before{content:"\f048"} .fa-fast-backward:before{content:"\f049"} .fa-backward:before{content:"\f04a"} .fa-play:before{content:"\f04b"} .fa-pause:before{content:"\f04c"} .fa-stop:before{content:"\f04d"} .fa-forward:before{content:"\f04e"} .fa-fast-forward:before{content:"\f050"} .fa-step-forward:before{content:"\f051"} .fa-eject:before{content:"\f052"} .fa-chevron-left:before{content:"\f053"} .fa-chevron-right:before{content:"\f054"} .fa-plus-circle:before{content:"\f055"} .fa-minus-circle:before{content:"\f056"} .fa-times-circle:before{content:"\f057"} .fa-check-circle:before{content:"\f058"} .fa-question-circle:before{content:"\f059"} .fa-info-circle:before{content:"\f05a"} .fa-crosshairs:before{content:"\f05b"} .fa-times-circle-o:before{content:"\f05c"} .fa-check-circle-o:before{content:"\f05d"} .fa-ban:before{content:"\f05e"} .fa-arrow-left:before{content:"\f060"} .fa-arrow-right:before{content:"\f061"} .fa-arrow-up:before{content:"\f062"} .fa-arrow-down:before{content:"\f063"} .fa-mail-forward:before,.fa-share:before{content:"\f064"} .fa-expand:before{content:"\f065"} .fa-compress:before{content:"\f066"} .fa-plus:before{content:"\f067"} .fa-minus:before{content:"\f068"} .fa-asterisk:before{content:"\f069"} .fa-exclamation-circle:before{content:"\f06a"} .fa-gift:before{content:"\f06b"} .fa-leaf:before{content:"\f06c"} .fa-fire:before{content:"\f06d"} .fa-eye:before{content:"\f06e"} 
.fa-eye-slash:before{content:"\f070"} .fa-warning:before,.fa-exclamation-triangle:before{content:"\f071"} .fa-plane:before{content:"\f072"} .fa-calendar:before{content:"\f073"} .fa-random:before{content:"\f074"} .fa-comment:before{content:"\f075"} .fa-magnet:before{content:"\f076"} .fa-chevron-up:before{content:"\f077"} .fa-chevron-down:before{content:"\f078"} .fa-retweet:before{content:"\f079"} .fa-shopping-cart:before{content:"\f07a"} .fa-folder:before{content:"\f07b"} .fa-folder-open:before{content:"\f07c"} .fa-arrows-v:before{content:"\f07d"} .fa-arrows-h:before{content:"\f07e"} .fa-bar-chart-o:before,.fa-bar-chart:before{content:"\f080"} .fa-twitter-square:before{content:"\f081"} .fa-facebook-square:before{content:"\f082"} .fa-camera-retro:before{content:"\f083"} .fa-key:before{content:"\f084"} .fa-gears:before,.fa-cogs:before{content:"\f085"} .fa-comments:before{content:"\f086"} .fa-thumbs-o-up:before{content:"\f087"} .fa-thumbs-o-down:before{content:"\f088"} .fa-star-half:before{content:"\f089"} .fa-heart-o:before{content:"\f08a"} .fa-sign-out:before{content:"\f08b"} .fa-linkedin-square:before{content:"\f08c"} .fa-thumb-tack:before{content:"\f08d"} .fa-external-link:before{content:"\f08e"} .fa-sign-in:before{content:"\f090"} .fa-trophy:before{content:"\f091"} .fa-github-square:before{content:"\f092"} .fa-upload:before{content:"\f093"} .fa-lemon-o:before{content:"\f094"} .fa-phone:before{content:"\f095"} .fa-square-o:before{content:"\f096"} .fa-bookmark-o:before{content:"\f097"} .fa-phone-square:before{content:"\f098"} .fa-twitter:before{content:"\f099"} .fa-facebook:before{content:"\f09a"} .fa-github:before{content:"\f09b"} .fa-unlock:before{content:"\f09c"} .fa-credit-card:before{content:"\f09d"} .fa-rss:before{content:"\f09e"} .fa-hdd-o:before{content:"\f0a0"} .fa-bullhorn:before{content:"\f0a1"} .fa-bell:before{content:"\f0f3"} .fa-certificate:before{content:"\f0a3"} .fa-hand-o-right:before{content:"\f0a4"} .fa-hand-o-left:before{content:"\f0a5"} .fa-hand-o-up:before{content:"\f0a6"} .fa-hand-o-down:before{content:"\f0a7"} .fa-arrow-circle-left:before{content:"\f0a8"} .fa-arrow-circle-right:before{content:"\f0a9"} .fa-arrow-circle-up:before{content:"\f0aa"} .fa-arrow-circle-down:before{content:"\f0ab"} .fa-globe:before{content:"\f0ac"} .fa-wrench:before{content:"\f0ad"} .fa-tasks:before{content:"\f0ae"} .fa-filter:before{content:"\f0b0"} .fa-briefcase:before{content:"\f0b1"} .fa-arrows-alt:before{content:"\f0b2"} .fa-group:before,.fa-users:before{content:"\f0c0"} .fa-chain:before,.fa-link:before{content:"\f0c1"} .fa-cloud:before{content:"\f0c2"} .fa-flask:before{content:"\f0c3"} .fa-cut:before,.fa-scissors:before{content:"\f0c4"} .fa-copy:before,.fa-files-o:before{content:"\f0c5"} .fa-paperclip:before{content:"\f0c6"} .fa-save:before,.fa-floppy-o:before{content:"\f0c7"} .fa-square:before{content:"\f0c8"} .fa-navicon:before,.fa-reorder:before,.fa-bars:before{content:"\f0c9"} .fa-list-ul:before{content:"\f0ca"} .fa-list-ol:before{content:"\f0cb"} .fa-strikethrough:before{content:"\f0cc"} .fa-underline:before{content:"\f0cd"} .fa-table:before{content:"\f0ce"} .fa-magic:before{content:"\f0d0"} .fa-truck:before{content:"\f0d1"} .fa-pinterest:before{content:"\f0d2"} .fa-pinterest-square:before{content:"\f0d3"} .fa-google-plus-square:before{content:"\f0d4"} .fa-google-plus:before{content:"\f0d5"} .fa-money:before{content:"\f0d6"} .fa-caret-down:before{content:"\f0d7"} .fa-caret-up:before{content:"\f0d8"} .fa-caret-left:before{content:"\f0d9"} .fa-caret-right:before{content:"\f0da"} 
.fa-columns:before{content:"\f0db"} .fa-unsorted:before,.fa-sort:before{content:"\f0dc"} .fa-sort-down:before,.fa-sort-desc:before{content:"\f0dd"} .fa-sort-up:before,.fa-sort-asc:before{content:"\f0de"} .fa-envelope:before{content:"\f0e0"} .fa-linkedin:before{content:"\f0e1"} .fa-rotate-left:before,.fa-undo:before{content:"\f0e2"} .fa-legal:before,.fa-gavel:before{content:"\f0e3"} .fa-dashboard:before,.fa-tachometer:before{content:"\f0e4"} .fa-comment-o:before{content:"\f0e5"} .fa-comments-o:before{content:"\f0e6"} .fa-flash:before,.fa-bolt:before{content:"\f0e7"} .fa-sitemap:before{content:"\f0e8"} .fa-umbrella:before{content:"\f0e9"} .fa-paste:before,.fa-clipboard:before{content:"\f0ea"} .fa-lightbulb-o:before{content:"\f0eb"} .fa-exchange:before{content:"\f0ec"} .fa-cloud-download:before{content:"\f0ed"} .fa-cloud-upload:before{content:"\f0ee"} .fa-user-md:before{content:"\f0f0"} .fa-stethoscope:before{content:"\f0f1"} .fa-suitcase:before{content:"\f0f2"} .fa-bell-o:before{content:"\f0a2"} .fa-coffee:before{content:"\f0f4"} .fa-cutlery:before{content:"\f0f5"} .fa-file-text-o:before{content:"\f0f6"} .fa-building-o:before{content:"\f0f7"} .fa-hospital-o:before{content:"\f0f8"} .fa-ambulance:before{content:"\f0f9"} .fa-medkit:before{content:"\f0fa"} .fa-fighter-jet:before{content:"\f0fb"} .fa-beer:before{content:"\f0fc"} .fa-h-square:before{content:"\f0fd"} .fa-plus-square:before{content:"\f0fe"} .fa-angle-double-left:before{content:"\f100"} .fa-angle-double-right:before{content:"\f101"} .fa-angle-double-up:before{content:"\f102"} .fa-angle-double-down:before{content:"\f103"} .fa-angle-left:before{content:"\f104"} .fa-angle-right:before{content:"\f105"} .fa-angle-up:before{content:"\f106"} .fa-angle-down:before{content:"\f107"} .fa-desktop:before{content:"\f108"} .fa-laptop:before{content:"\f109"} .fa-tablet:before{content:"\f10a"} .fa-mobile-phone:before,.fa-mobile:before{content:"\f10b"} .fa-circle-o:before{content:"\f10c"} .fa-quote-left:before{content:"\f10d"} .fa-quote-right:before{content:"\f10e"} .fa-spinner:before{content:"\f110"} .fa-circle:before{content:"\f111"} .fa-mail-reply:before,.fa-reply:before{content:"\f112"} .fa-github-alt:before{content:"\f113"} .fa-folder-o:before{content:"\f114"} .fa-folder-open-o:before{content:"\f115"} .fa-smile-o:before{content:"\f118"} .fa-frown-o:before{content:"\f119"} .fa-meh-o:before{content:"\f11a"} .fa-gamepad:before{content:"\f11b"} .fa-keyboard-o:before{content:"\f11c"} .fa-flag-o:before{content:"\f11d"} .fa-flag-checkered:before{content:"\f11e"} .fa-terminal:before{content:"\f120"} .fa-code:before{content:"\f121"} .fa-mail-reply-all:before,.fa-reply-all:before{content:"\f122"} .fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:"\f123"} .fa-location-arrow:before{content:"\f124"} .fa-crop:before{content:"\f125"} .fa-code-fork:before{content:"\f126"} .fa-unlink:before,.fa-chain-broken:before{content:"\f127"} .fa-question:before{content:"\f128"} .fa-info:before{content:"\f129"} .fa-exclamation:before{content:"\f12a"} .fa-superscript:before{content:"\f12b"} .fa-subscript:before{content:"\f12c"} .fa-eraser:before{content:"\f12d"} .fa-puzzle-piece:before{content:"\f12e"} .fa-microphone:before{content:"\f130"} .fa-microphone-slash:before{content:"\f131"} .fa-shield:before{content:"\f132"} .fa-calendar-o:before{content:"\f133"} .fa-fire-extinguisher:before{content:"\f134"} .fa-rocket:before{content:"\f135"} .fa-maxcdn:before{content:"\f136"} .fa-chevron-circle-left:before{content:"\f137"} 
.fa-chevron-circle-right:before{content:"\f138"} .fa-chevron-circle-up:before{content:"\f139"} .fa-chevron-circle-down:before{content:"\f13a"} .fa-html5:before{content:"\f13b"} .fa-css3:before{content:"\f13c"} .fa-anchor:before{content:"\f13d"} .fa-unlock-alt:before{content:"\f13e"} .fa-bullseye:before{content:"\f140"} .fa-ellipsis-h:before{content:"\f141"} .fa-ellipsis-v:before{content:"\f142"} .fa-rss-square:before{content:"\f143"} .fa-play-circle:before{content:"\f144"} .fa-ticket:before{content:"\f145"} .fa-minus-square:before{content:"\f146"} .fa-minus-square-o:before{content:"\f147"} .fa-level-up:before{content:"\f148"} .fa-level-down:before{content:"\f149"} .fa-check-square:before{content:"\f14a"} .fa-pencil-square:before{content:"\f14b"} .fa-external-link-square:before{content:"\f14c"} .fa-share-square:before{content:"\f14d"} .fa-compass:before{content:"\f14e"} .fa-toggle-down:before,.fa-caret-square-o-down:before{content:"\f150"} .fa-toggle-up:before,.fa-caret-square-o-up:before{content:"\f151"} .fa-toggle-right:before,.fa-caret-square-o-right:before{content:"\f152"} .fa-euro:before,.fa-eur:before{content:"\f153"} .fa-gbp:before{content:"\f154"} .fa-dollar:before,.fa-usd:before{content:"\f155"} .fa-rupee:before,.fa-inr:before{content:"\f156"} .fa-cny:before,.fa-rmb:before,.fa-yen:before,.fa-jpy:before{content:"\f157"} .fa-ruble:before,.fa-rouble:before,.fa-rub:before{content:"\f158"} .fa-won:before,.fa-krw:before{content:"\f159"} .fa-bitcoin:before,.fa-btc:before{content:"\f15a"} .fa-file:before{content:"\f15b"} .fa-file-text:before{content:"\f15c"} .fa-sort-alpha-asc:before{content:"\f15d"} .fa-sort-alpha-desc:before{content:"\f15e"} .fa-sort-amount-asc:before{content:"\f160"} .fa-sort-amount-desc:before{content:"\f161"} .fa-sort-numeric-asc:before{content:"\f162"} .fa-sort-numeric-desc:before{content:"\f163"} .fa-thumbs-up:before{content:"\f164"} .fa-thumbs-down:before{content:"\f165"} .fa-youtube-square:before{content:"\f166"} .fa-youtube:before{content:"\f167"} .fa-xing:before{content:"\f168"} .fa-xing-square:before{content:"\f169"} .fa-youtube-play:before{content:"\f16a"} .fa-dropbox:before{content:"\f16b"} .fa-stack-overflow:before{content:"\f16c"} .fa-instagram:before{content:"\f16d"} .fa-flickr:before{content:"\f16e"} .fa-adn:before{content:"\f170"} .fa-bitbucket:before{content:"\f171"} .fa-bitbucket-square:before{content:"\f172"} .fa-tumblr:before{content:"\f173"} .fa-tumblr-square:before{content:"\f174"} .fa-long-arrow-down:before{content:"\f175"} .fa-long-arrow-up:before{content:"\f176"} .fa-long-arrow-left:before{content:"\f177"} .fa-long-arrow-right:before{content:"\f178"} .fa-apple:before{content:"\f179"} .fa-windows:before{content:"\f17a"} .fa-android:before{content:"\f17b"} .fa-linux:before{content:"\f17c"} .fa-dribbble:before{content:"\f17d"} .fa-skype:before{content:"\f17e"} .fa-foursquare:before{content:"\f180"} .fa-trello:before{content:"\f181"} .fa-female:before{content:"\f182"} .fa-male:before{content:"\f183"} .fa-gittip:before{content:"\f184"} .fa-sun-o:before{content:"\f185"} .fa-moon-o:before{content:"\f186"} .fa-archive:before{content:"\f187"} .fa-bug:before{content:"\f188"} .fa-vk:before{content:"\f189"} .fa-weibo:before{content:"\f18a"} .fa-renren:before{content:"\f18b"} .fa-pagelines:before{content:"\f18c"} .fa-stack-exchange:before{content:"\f18d"} .fa-arrow-circle-o-right:before{content:"\f18e"} .fa-arrow-circle-o-left:before{content:"\f190"} .fa-toggle-left:before,.fa-caret-square-o-left:before{content:"\f191"} 
.fa-dot-circle-o:before{content:"\f192"} .fa-wheelchair:before{content:"\f193"} .fa-vimeo-square:before{content:"\f194"} .fa-turkish-lira:before,.fa-try:before{content:"\f195"} .fa-plus-square-o:before{content:"\f196"} .fa-space-shuttle:before{content:"\f197"} .fa-slack:before{content:"\f198"} .fa-envelope-square:before{content:"\f199"} .fa-wordpress:before{content:"\f19a"} .fa-openid:before{content:"\f19b"} .fa-institution:before,.fa-bank:before,.fa-university:before{content:"\f19c"} .fa-mortar-board:before,.fa-graduation-cap:before{content:"\f19d"} .fa-yahoo:before{content:"\f19e"} .fa-google:before{content:"\f1a0"} .fa-reddit:before{content:"\f1a1"} .fa-reddit-square:before{content:"\f1a2"} .fa-stumbleupon-circle:before{content:"\f1a3"} .fa-stumbleupon:before{content:"\f1a4"} .fa-delicious:before{content:"\f1a5"} .fa-digg:before{content:"\f1a6"} .fa-pied-piper:before{content:"\f1a7"} .fa-pied-piper-alt:before{content:"\f1a8"} .fa-drupal:before{content:"\f1a9"} .fa-joomla:before{content:"\f1aa"} .fa-language:before{content:"\f1ab"} .fa-fax:before{content:"\f1ac"} .fa-building:before{content:"\f1ad"} .fa-child:before{content:"\f1ae"} .fa-paw:before{content:"\f1b0"} .fa-spoon:before{content:"\f1b1"} .fa-cube:before{content:"\f1b2"} .fa-cubes:before{content:"\f1b3"} .fa-behance:before{content:"\f1b4"} .fa-behance-square:before{content:"\f1b5"} .fa-steam:before{content:"\f1b6"} .fa-steam-square:before{content:"\f1b7"} .fa-recycle:before{content:"\f1b8"} .fa-automobile:before,.fa-car:before{content:"\f1b9"} .fa-cab:before,.fa-taxi:before{content:"\f1ba"} .fa-tree:before{content:"\f1bb"} .fa-spotify:before{content:"\f1bc"} .fa-deviantart:before{content:"\f1bd"} .fa-soundcloud:before{content:"\f1be"} .fa-database:before{content:"\f1c0"} .fa-file-pdf-o:before{content:"\f1c1"} .fa-file-word-o:before{content:"\f1c2"} .fa-file-excel-o:before{content:"\f1c3"} .fa-file-powerpoint-o:before{content:"\f1c4"} .fa-file-photo-o:before,.fa-file-picture-o:before,.fa-file-image-o:before{content:"\f1c5"} .fa-file-zip-o:before,.fa-file-archive-o:before{content:"\f1c6"} .fa-file-sound-o:before,.fa-file-audio-o:before{content:"\f1c7"} .fa-file-movie-o:before,.fa-file-video-o:before{content:"\f1c8"} .fa-file-code-o:before{content:"\f1c9"} .fa-vine:before{content:"\f1ca"} .fa-codepen:before{content:"\f1cb"} .fa-jsfiddle:before{content:"\f1cc"} .fa-life-bouy:before,.fa-life-buoy:before,.fa-life-saver:before,.fa-support:before,.fa-life-ring:before{content:"\f1cd"} .fa-circle-o-notch:before{content:"\f1ce"} .fa-ra:before,.fa-rebel:before{content:"\f1d0"} .fa-ge:before,.fa-empire:before{content:"\f1d1"} .fa-git-square:before{content:"\f1d2"} .fa-git:before{content:"\f1d3"} .fa-hacker-news:before{content:"\f1d4"} .fa-tencent-weibo:before{content:"\f1d5"} .fa-qq:before{content:"\f1d6"} .fa-wechat:before,.fa-weixin:before{content:"\f1d7"} .fa-send:before,.fa-paper-plane:before{content:"\f1d8"} .fa-send-o:before,.fa-paper-plane-o:before{content:"\f1d9"} .fa-history:before{content:"\f1da"} .fa-circle-thin:before{content:"\f1db"} .fa-header:before{content:"\f1dc"} .fa-paragraph:before{content:"\f1dd"} .fa-sliders:before{content:"\f1de"} .fa-share-alt:before{content:"\f1e0"} .fa-share-alt-square:before{content:"\f1e1"} .fa-bomb:before{content:"\f1e2"} .fa-soccer-ball-o:before,.fa-futbol-o:before{content:"\f1e3"} .fa-tty:before{content:"\f1e4"} .fa-binoculars:before{content:"\f1e5"} .fa-plug:before{content:"\f1e6"} .fa-slideshare:before{content:"\f1e7"} .fa-twitch:before{content:"\f1e8"} .fa-yelp:before{content:"\f1e9"} 
.fa-newspaper-o:before{content:"\f1ea"} .fa-wifi:before{content:"\f1eb"} .fa-calculator:before{content:"\f1ec"} .fa-paypal:before{content:"\f1ed"} .fa-google-wallet:before{content:"\f1ee"} .fa-cc-visa:before{content:"\f1f0"} .fa-cc-mastercard:before{content:"\f1f1"} .fa-cc-discover:before{content:"\f1f2"} .fa-cc-amex:before{content:"\f1f3"} .fa-cc-paypal:before{content:"\f1f4"} .fa-cc-stripe:before{content:"\f1f5"} .fa-bell-slash:before{content:"\f1f6"} .fa-bell-slash-o:before{content:"\f1f7"} .fa-trash:before{content:"\f1f8"} .fa-copyright:before{content:"\f1f9"} .fa-at:before{content:"\f1fa"} .fa-eyedropper:before{content:"\f1fb"} .fa-paint-brush:before{content:"\f1fc"} .fa-birthday-cake:before{content:"\f1fd"} .fa-area-chart:before{content:"\f1fe"} .fa-pie-chart:before{content:"\f200"} .fa-line-chart:before{content:"\f201"} .fa-lastfm:before{content:"\f202"} .fa-lastfm-square:before{content:"\f203"} .fa-toggle-off:before{content:"\f204"} .fa-toggle-on:before{content:"\f205"} .fa-bicycle:before{content:"\f206"} .fa-bus:before{content:"\f207"} .fa-ioxhost:before{content:"\f208"} .fa-angellist:before{content:"\f209"} .fa-cc:before{content:"\f20a"} .fa-shekel:before,.fa-sheqel:before,.fa-ils:before{content:"\f20b"} .fa-meanpath:before{content:"\f20c"} \ No newline at end of file diff --git a/assets/css/font-awesome/fonts/FontAwesome.otf b/assets/css/font-awesome/fonts/FontAwesome.otf deleted file mode 100755 index 81c9ad94..00000000 Binary files a/assets/css/font-awesome/fonts/FontAwesome.otf and /dev/null differ diff --git a/assets/css/font-awesome/fonts/fontawesome-webfont.eot b/assets/css/font-awesome/fonts/fontawesome-webfont.eot deleted file mode 100755 index 84677bc0..00000000 Binary files a/assets/css/font-awesome/fonts/fontawesome-webfont.eot and /dev/null differ diff --git a/assets/css/font-awesome/fonts/fontawesome-webfont.svg b/assets/css/font-awesome/fonts/fontawesome-webfont.svg deleted file mode 100755 index d907b25a..00000000 --- a/assets/css/font-awesome/fonts/fontawesome-webfont.svg +++ /dev/null @@ -1,520 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/assets/css/font-awesome/fonts/fontawesome-webfont.ttf b/assets/css/font-awesome/fonts/fontawesome-webfont.ttf deleted file mode 100755 index 96a3639c..00000000 Binary files a/assets/css/font-awesome/fonts/fontawesome-webfont.ttf and /dev/null differ diff --git a/assets/css/font-awesome/fonts/fontawesome-webfont.woff b/assets/css/font-awesome/fonts/fontawesome-webfont.woff deleted file mode 100755 index 
628b6a52..00000000 Binary files a/assets/css/font-awesome/fonts/fontawesome-webfont.woff and /dev/null differ diff --git a/pres/index.md b/pres/index.md index 9f37adce..3e803dd0 100644 --- a/pres/index.md +++ b/pres/index.md @@ -195,10 +195,127 @@ Deep learning models have achieved impressive predictive performance by learning Chandan is a fifth and final-year PhD student in computer science. He hopes to build on recent advances in machine-learning to improve the world of healthcare. His research focuses on how to interpret machine-learning models with the goal of ensuring that they can be reliably used when someone’s health is at stake. + +## research overviews +
4-paragraph research overview (feb 2022)

🔎 My research focuses on how we can build trustworthy machine-learning systems by making them interpretable. In my work, interpretability is grounded seriously via close collaboration with domain experts, e.g. medical doctors or cell biologists. These collaborations have given rise to useful methodology, roughly split into two areas: (1) building more effective transparent models and (2) improving the trustworthiness of black-box models. Going forward, I hope to help bridge the gap between transparent models and black-box models to improve real-world healthcare.

🌳 Whenever possible, building transparent models is the most effective route towards ensuring interpretability. Transparent models are interpretable by design, including models such as (concise) decision trees, rule lists, and linear models. My work in this area was largely motivated by the problem of clinical decision-rule development. Clinical decision rules (especially those used in emergency medicine) need to be extremely transparent so they can be readily audited and used by physicians making split-second decisions. To this end, we have developed methodology for enhancing decision trees. For example, replacing the standard CART algorithm with a novel greedy algorithm for tree-sums can substantially improve predictive performance without sacrificing interpretability. Additionally, hierarchical regularization can improve the predictions of an already fitted model without altering its interpretability. Despite their effectiveness, transparent models such as these often get overlooked in favor of black-box models; to address this issue, we've spent a lot of time curating imodels, an open-source package for fitting state-of-the-art transparent models.
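
As a concrete usage sketch, the imodels package exposes a scikit-learn-style interface for fitting such models; the class and parameter names below are recalled from its documentation and may differ slightly across versions:

```python
# Minimal sketch, assuming imodels' scikit-learn-style fit/predict interface;
# names (FIGSClassifier, HSTreeClassifierCV, max_rules, estimator_) are recalled
# from the imodels docs and may differ by version.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

from imodels import FIGSClassifier, HSTreeClassifierCV  # tree-sums; hierarchical shrinkage

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Greedy tree-sums (FIGS): grows a small sum of trees rather than one CART tree.
figs = FIGSClassifier(max_rules=12)
figs.fit(X_train, y_train)
print("tree-sums accuracy:", figs.score(X_test, y_test))

# Hierarchical shrinkage: regularizes an already-fitted tree toward its ancestors'
# predictions, leaving the tree structure (and hence its interpretability) unchanged.
hs = HSTreeClassifierCV(estimator_=DecisionTreeClassifier(max_leaf_nodes=20))
hs.fit(X_train, y_train)
print("shrunk-tree accuracy:", hs.score(X_test, y_test))
```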

🌀 My second line of work focuses on interpreting and improving black-box models, such as neural networks, for the cases when a transparent model simply can't predict well enough. Here, I work closely on real-world problems such as analyzing imaging data from cell biology and cosmology. Interpretability in these contexts demands more nuanced information than standard notions of "feature importance" common in the literature. As a result, we have developed methods to characterize and summarize the interactions in a neural network, particularly in transformed domains (such as the Fourier domain), where domain interpretations can be more natural. I'm particularly interested in how we can ensure that these interpretations are useful, either by using them to embed prior knowledge into a model or to identify when it can be trusted.
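
As a toy illustration of frequency-domain attribution (a generic stand-in, not the actual methods developed in this work), one can perturb individual Fourier components of an input and measure how a fitted model's prediction changes:

```python
# Illustrative sketch only: probe a black-box model's sensitivity to individual
# Fourier components of a 1-D input via finite-difference perturbations in the
# frequency domain. The model and data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d = 500, 64
X = rng.normal(size=(n, d))
# Toy target that depends on one low-frequency component of the signal.
y = np.real(np.fft.rfft(X, axis=1)[:, 2]) + 0.1 * rng.normal(size=n)

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X, y)

def fourier_sensitivity(model, x, eps=1e-2):
    """Approximate change in prediction per unit perturbation of each rfft coefficient."""
    coeffs = np.fft.rfft(x)
    base = model.predict(x[None])[0]
    sens = np.zeros(len(coeffs))
    for k in range(len(coeffs)):
        perturbed = coeffs.copy()
        perturbed[k] += eps
        x_pert = np.fft.irfft(perturbed, n=len(x))
        sens[k] = (model.predict(x_pert[None])[0] - base) / eps
    return sens

sens = fourier_sensitivity(model, X[0])
print("most influential frequency index:", int(np.argmax(np.abs(sens))))
```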

🤝 There is a lot more work to do on bridging the gap between transparent models and black-box models in the real world. One promising avenue is distillation, whereby we can use a black-box model to build a better transparent model. For example, in one work we were able to distill state-of-the-art neural networks in cell biology and cosmology into transparent wavelet models with <40 parameters. Despite this huge size reduction, these models actually improve prediction performance. By closely incorporating domain knowledge into our models and the way we approach problems, I believe interpretability can help unlock many benefits of machine-learning for improving healthcare and science.
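
The sketch below illustrates the general distillation recipe with standard scikit-learn pieces; it stands in for, but is not, the adaptive wavelet distillation used in that work:

```python
# Generic distillation sketch (illustrative only; the work above distills into
# adaptive wavelet models rather than the plain decision tree used here):
# fit a black-box "teacher", then fit a compact transparent "student" to mimic it.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=2000, n_features=10, noise=0.5, random_state=0)

teacher = GradientBoostingRegressor(random_state=0).fit(X, y)        # black-box model
student = DecisionTreeRegressor(max_leaf_nodes=16, random_state=0)   # transparent model
student.fit(X, teacher.predict(X))                                   # train on the teacher's outputs

# How faithfully the small student reproduces the teacher (R^2 of agreement).
print("student-teacher agreement:", student.score(X, teacher.predict(X)))
```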

-
\ No newline at end of file + + +
+ Read research overview (interpretable modeling) +
+

🔎 My research focuses on how we can build trustworthy machine-learning systems by making them + interpretable. In + my work, interpretability is grounded seriously via close collaboration with domain experts, e.g. + medical + doctors or cell biologists. These collaborations have given rise to useful methodology, roughly split + into two + areas: (1) building more effective transparent models and (2) improving the trustworthiness of + black-box + models. Going forward, I hope to help bridge the gap between transparent models and black-box + models to + improve real-world healthcare. +

+

🌳 Whenever possible, building transparent models is the most effective route towards ensuring + interpretability. + Transparent models are interpretable by design, including models such as (concise) decision trees, rule + lists, + and linear models. My work in this area was largely motivated by the problem of + clinical + decision-rule development. Clinical decision rules (especially those used in emergency + medicine) need + to be extremely transparent so they can be readily audited and used by physicians making split-second + decisions. + To this end, we have developed methodology for enhancing decision trees. For example, replacing the + standard + CART algorithm with a novel greedy algorithm for + tree-sums can + substantially improve predictive performance without sacrificing interpretability. Additionally, + hierarchical regularization can improve the predictions + of + an already fitted model without altering its interpretability. Despite their effectiveness, transparent + models + such as these often get overlooked in favor of black-box models; to address this issue, we've spent + a lot of + time curating imodels, an open-source package for + fitting + state-of-the-art transparent models. +

+

🌀 My second line of work focuses on interpreting and improving black-box models, such as + neural + networks, for + the cases when a transparent model simply can't predict well enough. Here, I work closely on + real-world + problems such as analyzing imaging data from cell biology and cosmology. Interpretability in these contexts demands + more + nuanced information than standard notions of "feature importance" common in the literature. As + a result, we + have developed methods to characterize and summarize the interactions in a neural network, particularly in transformed domains (such as the Fourier domain), where + domain interpretations can be more natural. I'm particularly interested in how we can ensure that + these + interpretations are useful, either by using them to embed prior knowledge into a model or to + identify when it can be trusted.

+

🤝 There is a lot more work to do on bridging the gap between transparent models and black-box models in + the real + world. One promising avenue is distillation, whereby we can use a black-box model to build a better + transparent + model. For example, in one + work we were able to distill state-of-the-art neural networks in cell biology and cosmology into + transparent wavelet models with <40 parameters. Despite this huge size reduction, these models + actually improve + prediction performance. By closely incorporating domain knowledge into our models and the way we approach + problems, I + believe interpretability can help unlock many benefits of machine-learning for improving healthcare and + science. +

+
+
+ + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file