Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
Today’s paper introduces Phi-3-mini, a highly capable 3.8B-parameter language model that can run locally on a phone, yet rivals the performance of much larger models such as GPT-3.5 and Mixtral 8x7B on academic benchmarks. The key to this performance lies in the training dataset, a scaled-up version of the one used for the previous Phi-2 model, consisting of heavily filtered web data and synthetic data. Additionally, Phi-3-small (7B) and Phi-3-medium (14B) are introduced, offering even better performance while remaining relatively small in parameter count. Thanks to its small footprint, Phi-3-mini can be deployed and run entirely on-device.
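For context, here is a minimal sketch of how a model of this size might be loaded and queried locally with the Hugging Face transformers library. The model identifier, precision settings, and prompt are assumptions for illustration, not details taken from the paper; on an actual phone, deployment would typically go through a quantized runtime rather than this workflow.

```python
# Minimal local-inference sketch (assumed checkpoint name and settings, not from the paper).
# Requires: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed public checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load weights in the checkpoint's native precision
    device_map="auto",    # place the model on whatever device is available
    trust_remote_code=True,
)

prompt = "Explain in one paragraph why small language models can run on phones."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion and print it.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On a phone, the same weights would more likely be 4-bit quantized and served through an on-device inference engine, but the sketch above captures the basic load-and-generate flow on a laptop or workstation.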