{"id":59478,"date":"2022-03-18T12:57:32","date_gmt":"2022-03-18T11:57:32","guid":{"rendered":"https:\/\/eiposgrados.com\/?p=59478"},"modified":"2022-04-05T17:56:47","modified_gmt":"2022-04-05T15:56:47","slug":"jax-machine-learning","status":"publish","type":"post","link":"https:\/\/eiposgrados.com\/eng\/blog-python\/jax-machine-learning\/","title":{"rendered":"JAX machine learning\u00a0"},"content":{"rendered":"<p>JAX is a new <strong>Python library<\/strong> for <strong>machine learning from Google<\/strong>, designed for high-performance numerical computing. Its API for numeric functions is based on NumPy. Both Python and NumPy are widely used and familiar, making JAX simple, flexible, and easy to learn.<\/p>\n\n\n\n<p>JAX is described as \u201cComposable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU\/TPU (graphics processing unit\/tensor processing unit) and more.\u201d&nbsp;<\/p>\n\n\n\n<p>The library uses an <strong>Autograd-style function transformation<\/strong> (which can automatically differentiate native Python and NumPy code) to convert one function into another that returns the gradient of the original. JAX also offers the <strong>jit function transformation<\/strong> for <em>just-in-time compilation<\/em> of existing functions, as well as vmap and pmap for vectorization and parallelization, respectively.<\/p>\n\n\n\n<p>JAX is often a bit <strong>faster than NumPy<\/strong>, which is already quite optimized. On a CPU, JAX and NumPy effectively generate the same short series of BLAS and LAPACK calls, so there is not much room for improvement over the NumPy reference implementation; with small arrays, JAX may even be a little slower.<\/p>\n\n\n\n<p><strong>TensorFlow and PyTorch<\/strong> can also be compiled, but these compiled modes were added after their initial development and therefore have some drawbacks. 
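The grad, jit, and vmap transformations described above can be sketched as follows; this is a minimal illustration assuming JAX is installed, and the function f is a made-up example, not part of JAX itself:

```python
import jax
import jax.numpy as jnp

# A plain numerical function written against the NumPy-style API.
# (f is a made-up example for illustration.)
def f(x):
    return x ** 2 + 3.0 * x

# grad transforms f into a new function computing df/dx = 2x + 3.
df = jax.grad(f)

# vmap vectorizes f over a batch of inputs without an explicit Python loop.
batched_f = jax.vmap(f)

print(df(2.0))                     # 7.0
print(batched_f(jnp.arange(3.0)))  # values 0, 4, 10
```

Because the API mirrors NumPy, the original function needs no changes to be differentiated or vectorized.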
In the case of TensorFlow 2.0, although <strong>Eager Execution<\/strong> is the default mode, it is not 100% compatible with graph mode, which sometimes produces a poor experience for the developer.&nbsp;<\/p>\n\n\n\n<p>PyTorch, for its part, has a history of relying on less intuitive tensor formats dating from before it adopted Eager Execution.<\/p>\n\n\n\n<p>The difference between the execution modes is that <strong>Graph<\/strong> mode is difficult to learn and test and is not very intuitive. However, graph execution is ideal for training large models.&nbsp;<\/p>\n\n\n\n<p>For small models, beginners, and everyday development, Eager execution is more suitable.&nbsp;<\/p>\n\n\n\n<h2 class=\"gb-headline gb-headline-21281d2c gb-headline-text\"><strong>JAX advantages<\/strong><\/h2>\n\n\n\n<p>The advantage of <strong>JAX is that it was designed for both execution modes (Eager and Graph)<\/strong> from the beginning, so it does not suffer from the problems of its predecessors, PyTorch and TensorFlow. Those deep learning libraries are built around high-level APIs for advanced deep learning methods. JAX, by comparison, is a <strong>more functional library<\/strong> for arbitrary differentiable programming: it lets you jit-compile your own Python functions into XLA-optimized kernels (XLA, Accelerated Linear Algebra, is a domain-specific compiler for linear algebra) using a one-function API. Compilation and automatic differentiation can be <strong>arbitrarily composed<\/strong>, so you can express sophisticated algorithms and get <strong>maximum performance<\/strong> without having to leave Python.<\/p>\n\n\n\n<p><strong>JAX <\/strong>is perhaps currently <strong>the most advanced library in terms of Machine Learning (ML)<\/strong> and promises to make machine learning programming more intuitive, structured, and cleaner. 
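As a minimal sketch of the composability mentioned above (assuming JAX is installed; the loss function is a made-up example), jit and grad can be nested directly:

```python
import jax
import jax.numpy as jnp

# A made-up quadratic "loss"; its gradient is dloss/dw = 2 * (w - 1).
def loss(w):
    return jnp.sum((w - 1.0) ** 2)

# Transformations compose: take the gradient, then jit-compile the
# resulting function into an XLA-optimized kernel.
fast_grad = jax.jit(jax.grad(loss))

w = jnp.array([0.0, 2.0])
print(fast_grad(w))  # gradient is [-2., 2.]
```

The same composition works the other way around (grad of a jitted function), which is what lets sophisticated algorithms stay in plain Python.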
And, above all, it can <strong>replace<\/strong> <strong>TensorFlow and PyTorch<\/strong> with significant advantages.<\/p>","protected":false},"excerpt":{"rendered":"<p>Do you know the features and advantages of using JAX Machine Learning? We tell you. <\/p>","protected":false},"author":90,"featured_media":59483,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[407,142],"tags":[],"class_list":["post-59478","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog-python","category-blog"],"acf":[],"_links":{"self":[{"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/posts\/59478","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/users\/90"}],"replies":[{"embeddable":true,"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/comments?post=59478"}],"version-history":[{"count":0,"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/posts\/59478\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/media\/59483"}],"wp:attachment":[{"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/media?parent=59478"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/categories?post=59478"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/eiposgrados.com\/eng\/wp-json\/wp\/v2\/tags?post=59478"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}