Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.

Check out our web image classification demo!

Why Caffe?

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices.

Extensible code fosters active development. In Caffe's first year, it has been forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors the framework tracks the state of the art in both code and models.

Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That's 1 ms/image for inference and 4 ms/image for learning; more recent library versions and hardware are faster still. We believe that Caffe is among the fastest convnet implementations available.

Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.
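To illustrate the configuration-driven style and the single CPU/GPU flag mentioned above, here is a minimal sketch in Caffe's prototxt format. The layer names (`data`, `ip1`) and hyperparameter values are hypothetical, chosen only for illustration:

```
# net.prototxt — a hypothetical one-layer model, defined entirely by
# configuration rather than hard-coded in C++ or Python.
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"   # input blob (assumed to be provided by a data layer)
  top: "ip1"
  inner_product_param {
    num_output: 10
  }
}
```

```
# solver.prototxt — optimization settings live in configuration too.
net: "net.prototxt"
base_lr: 0.01
# The single flag that switches between devices: change GPU to CPU
# and the same model trains or runs on commodity hardware unchanged.
solver_mode: GPU
```

Training then requires no code at all, just `caffe train --solver=solver.prototxt`; deploying the same `net.prototxt` on a CPU-only machine needs only the one-line flag change.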