TensorFlow 1.13 Released

TensorFlow 1.13.0

TensorFlow 1.13.0 has been released. Only 22 hours after the release, we are bringing you the changes in this version. TensorFlow is an open-source library for machine learning. It is currently used by 50 research teams and by Google for speech recognition, Gmail, Google Photos, and Google Search. You can download this release by clicking the GitHub link, or access it directly from the GitHub archive.

News update:

  • Version 1.13.0 was removed from GitHub less than 24 hours after release, for unknown reasons.
  • TensorFlow 1.13.1 was released at 2:54 on Tuesday, February 26, 2019 (GMT+3:30). The sudden removal of 1.13.0 was curious, so we are tracking the changes of this release on this page.

Major Features and Improvements in TensorFlow 1.13.0

  • TensorFlow Lite has moved from contrib to core. This means that Python modules are under tf.lite and source code is now under tensorflow/lite rather than tensorflow/contrib/lite.
  • TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
  • Support for Python 3.7 on all operating systems.
  • Moved NCCL to core.
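
Since TensorFlow Lite now lives in core, the Python entry points move from tf.contrib.lite to tf.lite. A minimal sketch, assuming the 1.x TFLiteConverter API and a hypothetical SavedModel path:

```python
import tensorflow as tf

# TensorFlow Lite is now under tf.lite instead of tf.contrib.lite.
# Convert a SavedModel (the path below is hypothetical) to a .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
tflite_model = converter.convert()

with open("/tmp/model.tflite", "wb") as f:
    f.write(tflite_model)
```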

Behavioral Changes

  • Disallow conversion of python floating types to uint32/64 (matching behavior of other integer types) in tf.constant.
  • Make the gain argument of convolutional orthogonal initializers have consistent behavior with the tf.initializers.orthogonal initializer, i.e. scale the output l2-norm by gain and NOT by sqrt(gain). (Note that these functions are currently in tf.contrib which is not guaranteed backward compatible).
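
A minimal sketch of the tf.constant change above: python floats are no longer silently converted to the unsigned integer dtypes, matching how the other integer dtypes already behave:

```python
import tensorflow as tf

# Integer literals with an unsigned dtype are still fine.
ok = tf.constant(3, dtype=tf.uint32)

# As of 1.13 a python float is rejected for uint32/uint64, just as it
# already was for int32/int64 (previously it was silently converted).
try:
    bad = tf.constant(3.5, dtype=tf.uint64)
except (TypeError, ValueError) as e:
    print("rejected:", e)
```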

Bug Fixes and Other Changes

Documentation

  • Update the doc with the details about the rounding mode used in quantize_and_dequantize_v2.
  • Clarify that tensorflow::port::InitMain() should be called before using the TensorFlow library. Programs failing to do this are not portable to all platforms.

Deprecations and Symbol Renames

  • Removing deprecations for endpoints
  • Deprecate tf.data.Dataset.shard.
  • Deprecate saved_model.loader.load which is replaced by saved_model.load and saved_model.main_op, which will be replaced by saved_model.main_op in V2.
  • Deprecate tf.QUANTIZED_DTYPES. The official new symbol is tf.dtypes.QUANTIZED_DTYPES.
  • Update sklearn imports for deprecated packages.
  • Deprecate Variable.count_up_to and tf.count_up_to in favor of Dataset.range.
  • Export confusion_matrix op as tf.math.confusion_matrix instead of tf.train.confusion_matrix (see the example after this list).
  • Add tf.dtypes. endpoint for every constant in dtypes.py; moving endpoints in versions.py to corresponding endpoints in tf.sysconfig. and tf.version.; moving all constants under tf.saved_model submodules to tf.saved_model module. New endpoints are added in V1 and V2 but existing endpoint removals are only applied in V2.
  • Deprecates behavior where device assignment overrides collocation constraints inside a collocation context manager.
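
For example, the confusion_matrix endpoint mentioned above is now exported under tf.math; a minimal sketch:

```python
import tensorflow as tf

labels      = tf.constant([0, 1, 2, 2])
predictions = tf.constant([0, 2, 2, 2])

# New endpoint; tf.train.confusion_matrix remains only as a deprecated alias.
cm = tf.math.confusion_matrix(labels, predictions, num_classes=3)

with tf.Session() as sess:
    print(sess.run(cm))  # 3x3 matrix of prediction counts
```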

Keras & Python API

  • Add to Keras functionality analogous to tf.register_tensor_conversion_function.
  • Subclassed Keras models can now be saved through tf.contrib.saved_model.save_keras_model.
  • LinearOperator.matmul now returns a new LinearOperator.
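
A minimal sketch of saving a subclassed Keras model with tf.contrib.saved_model.save_keras_model; the model, data, and export directory below are hypothetical and only for illustration:

```python
import numpy as np
import tensorflow as tf

class TinyModel(tf.keras.Model):
    """A hypothetical subclassed model used only for illustration."""
    def __init__(self):
        super(TinyModel, self).__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

model = TinyModel()
model.compile(optimizer="adam", loss="mse")
model.fit(np.zeros((8, 4), np.float32), np.zeros((8, 1), np.float32),
          epochs=1, verbose=0)

# Export the trained model as a SavedModel (path is hypothetical).
tf.contrib.saved_model.save_keras_model(model, "/tmp/tiny_model_export")
```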

New Ops and Improved Op Functionality

  • Add a Nearest Neighbor Resize op.
  • Add an ignore_unknown argument to parse_values which suppresses ValueError for unknown hyperparameter types. Such hyperparameters are ignored.
  • Add tf.linalg.matvec convenience function (see the sketch after this list).
  • tf.einsum() raises ValueError for unsupported equations like "ii->".
  • Add DCT-I and IDCT-I in tf.signal.dct and tf.signal.idct.
  • Add LU decomposition op.
  • Add quantile loss to gradient boosted trees in estimator.
  • Add round_mode to QuantizeAndDequantizeV2 op to select rounding algorithm.
  • Add unicode_encode, unicode_decode, unicode_decode_with_offsets, unicode_split, unicode_split_with_offsets, and unicode_transcode ops. Amongst other things, these ops add the ability to encode, decode, and transcode a variety of input text encoding formats into the main Unicode encodings (UTF-8, UTF-16-BE, UTF-32-BE).
  • Add “unit” attribute to the substr op, which allows obtaining the substring of a string containing unicode characters.
  • Broadcasting support for Ragged Tensors.
  • SpaceToDepth supports uint8 data type.
  • Support multi-label quantile regression in estimator.
  • We now use “div” as the default partition_strategy in tf.nn.safe_embedding_lookup_sparse, tf.nn.sampled_softmax and tf.nn.nce_loss.
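
A minimal sketch of the tf.linalg.matvec convenience function added above, which multiplies a matrix by a vector without manual reshaping:

```python
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])  # shape [2, 2]
x = tf.constant([10., 20.])            # shape [2]

# Equivalent to tf.squeeze(tf.matmul(a, tf.expand_dims(x, -1)), -1).
y = tf.linalg.matvec(a, x)

with tf.Session() as sess:
    print(sess.run(y))  # [ 50. 110.]
```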

Performance

  • Improve performance of GPU cumsum/cumprod by up to 300x.
  • Added support for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.

TensorFlow 2.0 Development

  • Add a command line tool to convert to TF2.0, tf_upgrade_v2
  • Merge tf.spectral into tf.signal for TensorFlow 2.0.
  • Change the default recurrent activation function for LSTM from ‘hard_sigmoid’ to ‘sigmoid’ in 2.0. Historically the recurrent activation has been ‘hard_sigmoid’ because it is faster than ‘sigmoid’. With the new unified backend between CPU and GPU mode, and since the CuDNN kernel uses sigmoid, the default for CPU mode is changed to sigmoid as well. With that, the default LSTM will be compatible with both the CPU and GPU kernels, which lets users with a GPU use the CuDNN kernel by default and get a roughly 10x performance boost in training. Note that this is a checkpoint-breaking change: users who want to load a 1.x pre-trained checkpoint should construct the layer with LSTM(recurrent_activation='hard_sigmoid') to fall back to the 1.x behavior (see the sketch after this list).
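
A minimal sketch of the fallback described above; the layer width is arbitrary and only for illustration:

```python
import tensorflow as tf

# In 2.0 the default recurrent_activation becomes 'sigmoid', which is
# what the CuDNN GPU kernel uses, so this layer can run on either kernel.
lstm_default = tf.keras.layers.LSTM(64)

# To restore a 1.x checkpoint trained with the old default, request the
# old recurrent activation explicitly.
lstm_compat = tf.keras.layers.LSTM(64, recurrent_activation="hard_sigmoid")
```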

TensorFlow Lite

  • Move from tensorflow/contrib/lite to tensorflow/lite.
  • Add experimental Java API for injecting TensorFlow Lite delegates
  • Add support for strings in TensorFlow Lite Java API.

tf.contrib

  • Add Apache Ignite Filesystem plugin to support accessing Apache IGFS.
  • Dropout now takes rate argument, keep_prob is deprecated.
  • References to tf.contrib.estimator were changed to tf.estimator:
    • tf.contrib.estimator.BaselineEstimator with tf.estimator.BaselineEstimator
    • tf.contrib.estimator.DNNLinearCombinedEstimator with tf.estimator.DNNLinearCombinedEstimator
    • tf.contrib.estimator.DNNEstimator with tf.estimator.DNNEstimator
    • tf.contrib.estimator.LinearEstimator with tf.estimator.LinearEstimator
    • tf.contrib.estimator.InMemoryEvaluatorHook with tf.estimator.experimental.InMemoryEvaluatorHook
    • tf.contrib.estimator.make_stop_at_checkpoint_step_hook with tf.estimator.experimental.make_stop_at_checkpoint_step_hook
  • Expose tf.distribute.Strategy as the new name for tf.contrib.distribute.DistributionStrategy.
  • Migrate linear optimizer from contrib to core.
  • Move tf.contrib.signal to tf.signal (preserving aliases in tf.contrib.signal).
  • Users of tf.contrib.estimator.export_all_saved_models and related should switch to tf.estimator.Estimator.experimental_export_all_saved_models.
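
A minimal sketch of the dropout change above, assuming it refers to tf.nn.dropout; rate is the fraction of units to drop, i.e. rate = 1 - keep_prob:

```python
import tensorflow as tf

x = tf.ones([4, 4])

# Deprecated spelling: keep 80% of the units.
# y = tf.nn.dropout(x, keep_prob=0.8)

# Preferred spelling in 1.13: drop 20% of the units.
y = tf.nn.dropout(x, rate=0.2)
```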

tf.data

  • Add tf.data.experimental.StatsOptions(), to configure options to collect statistics from tf.data.Dataset pipeline using StatsAggregator. Add nested option, experimental_stats (which takes a tf.data.experimental.StatsOptions object), to tf.data.Options. Deprecates tf.data.experimental.set_stats_aggregator.
  • Performance optimizations:
    • Add tf.data.experimental.OptimizationOptions(), to configure options to enable tf.data performance optimizations. Add nested option, experimental_optimization (which takes a tf.data.experimental.OptimizationOptions object), to tf.data.Options. Remove performance optimization options from tf.data.Options, and add them under tf.data.experimental.OptimizationOptions instead.
    • Enable map_and_batch_fusion and noop_elimination optimizations by default. They can be disabled by configuring tf.data.experimental.OptimizationOptions to set map_and_batch = False or noop_elimination = False respectively. To disable all default optimizations, set apply_default_optimizations = False.
    • Support parallel map in map_and_filter_fusion.
    • Disable static optimizations for input pipelines that use non-resource tf.Variables.
  • Add NUMA-aware MapAndBatch dataset.
  • Deprecate tf.data.Dataset.make_one_shot_iterator() in V1, removed it from V2, and added tf.compat.v1.data.make_one_shot_iterator() (see the sketch after this list).
  • Deprecate tf.data.Dataset.make_initializable_iterator() in V1, removed it from V2, and added tf.compat.v1.data.make_initializable_iterator().
  • Enable nested dataset support in core tf.data transformations.
  • For tf.data.Dataset implementers: Added tf.data.Dataset._element_structure property to replace Dataset.output_{types,shapes,classes}.
  • Make num_parallel_calls of tf.data.Dataset.interleave and tf.data.Dataset.map work in Eager mode.
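
A minimal sketch of the iterator deprecation above, using the new compat endpoint instead of the instance method:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# Deprecated in V1 and removed in V2:
#   iterator = dataset.make_one_shot_iterator()

# Replacement endpoint added in 1.13:
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_element = iterator.get_next()

with tf.Session() as sess:
    for _ in range(5):
        print(sess.run(next_element))
```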

Toolchains

  • Fixed OpenSSL compatibility by avoiding EVP_MD_CTX_destroy.
  • Added bounds checking to printing deprecation warnings.
  • Upgraded CUDA dependency to 10.0
  • To build with Android NDK r14b, add “#include <linux/compiler.h>” to android-ndk-r14b/platforms/android-14/arch-*/usr/include/linux/futex.h
  • Removed :android_tensorflow_lib_selective_registration* targets, use :android_tensorflow_lib_lite* targets instead.

XLA

  • Move RoundToEven function to xla/client/lib/math.h.
  • A new environment variable TF_XLA_DEBUG_OPTIONS_PASSTHROUGH set to “1” or “true” allows the debug options passed within an XRTCompile op to be passed directly to the XLA compilation backend. If such variable is not set (service side), only a restricted set will be passed through.
  • Allow the XRTCompile op to return the ProgramShape resulting from the XLA compilation as a second return argument.
  • XLA HLO graphs can now be rendered as SVG/HTML.
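
A minimal sketch of enabling the passthrough described above; the variable must be set in the service-side environment before XRTCompile ops are compiled:

```python
import os

# "1" or "true" lets the debug options carried by XRTCompile ops reach the
# XLA compilation backend; unset, only a restricted set is passed through.
os.environ["TF_XLA_DEBUG_OPTIONS_PASSTHROUGH"] = "1"
```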

Estimator

  • Replace all occurrences of tf.contrib.estimator.BaselineEstimator with tf.estimator.BaselineEstimator
  • Replace all occurrences of tf.contrib.estimator.DNNLinearCombinedEstimator with tf.estimator.DNNLinearCombinedEstimator
  • Replace all occurrences of tf.contrib.estimator.DNNEstimator with tf.estimator.DNNEstimator
  • Replace all occurrences of tf.contrib.estimator.LinearEstimator with tf.estimator.LinearEstimator
  • Users of tf.contrib.estimator.export_all_saved_models and related should switch to tf.estimator.Estimator.experimental_export_all_saved_models.
  • Update regression_head to the new Head API for Canned Estimator V2.
  • Switch multi_class_head to Head API for Canned Estimator V2.
  • Replace all occurrences of tf.contrib.estimator.InMemoryEvaluatorHook and tf.contrib.estimator.make_stop_at_checkpoint_step_hook with tf.estimator.experimental.InMemoryEvaluatorHook and tf.estimator.experimental.make_stop_at_checkpoint_step_hook.
  • Migrate linear optimizer from contrib to core.
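
The renames above are import-path changes only; a minimal sketch with a hypothetical feature column (in 1.13 the head constructors themselves are still under tf.contrib.estimator):

```python
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]

# Before 1.13: tf.contrib.estimator.LinearEstimator(...)
# In 1.13 the same canned estimator lives under tf.estimator.
estimator = tf.estimator.LinearEstimator(
    head=tf.contrib.estimator.regression_head(),
    feature_columns=feature_columns)
```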