Intel Optimization for TensorFlow vulnerabilities

429 known vulnerabilities affecting intel/optimization_for_tensorflow.

Total CVEs: 429
CISA KEV: 0
Public exploits: 0
Exploited in wild: 0

Severity breakdown: CRITICAL 5 · HIGH 121 · MEDIUM 200 · LOW 103

Vulnerabilities

Page 11 of 22
CVE-2021-41204 [MEDIUM] CWE-824 · Segfault while copying constant resource tensor
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
During TensorFlow's Grappler optimizer phase, constant folding might attempt to deep copy a resource tensor. This results in a segfault, as these tensors are supposed to not change.

### Patches
We have patched the issue in GitHub commit [7731e8dfbe4a56773be5dc94d631611211156659](https://github.com/tensorflow/tensorflow/commit/7731e8dfbe4a56773be5dc94d631611…)

Sources: GHSA, OSV
CVE-2021-41213 [MEDIUM] CWE-662 · Deadlock in mutually recursive `tf.function` objects
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [code behind the `tf.function` API](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/python/eager/def_function.py#L542) can be made to deadlock when two `tf.function`-decorated Python functions are mutually recursive:

```python
import tensorflow as tf

@tf.function()
def fun1(num):
    if num == 1:
        retu…
```

Sources: GHSA, OSV
CVE-2021-41199 [MEDIUM] CWE-190 · Overflow/crash in `tf.image.resize` when size is large
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
If `tf.image.resize` is called with a large input argument then the TensorFlow process will crash due to a `CHECK`-failure caused by an overflow.

```python
import tensorflow as tf
import numpy as np

tf.keras.layers.UpSampling2D(
    size=1610637938,
    data_format='channels_first',
    interpolation='bilinear')(np.ones((5, 1, 1, 1)))
```

The number of ele…

Sources: GHSA, OSV
CVE-2021-41224 [MEDIUM] CWE-125 · `SparseFillEmptyRows` heap OOB
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [implementation](https://github.com/tensorflow/tensorflow/blob/e71b86d47f8bc1816bf54d7bddc4170e47670b97/tensorflow/core/kernels/sparse_fill_empty_rows_op.cc#L194-L241) of `SparseFillEmptyRows` can be made to trigger a heap OOB access:

```python
import tensorflow as tf

data = tf.raw_ops.SparseFillEmptyRows(
    indices=[[0, 0], [0, 0], [0, 0]],
    values=['ssssssssssssssssssssssssssssssssssssssssssss…
```

Sources: GHSA, OSV
CVE-2021-41228 [MEDIUM] CWE-78 · Code injection in `saved_model_cli`
Affected: ≥ 2.5.0, < 2.5.2; ≥ 0, < 2.4.4; +1 more · Published: 2021-11-10

### Impact
TensorFlow's `saved_model_cli` tool is vulnerable to a code injection as it [calls `eval` on user-supplied strings](https://github.com/tensorflow/tensorflow/blob/87462bfac761435a46641ff2f10ad0b6e5414a4b/tensorflow/python/tools/saved_model_cli.py#L524-L550):

```python
def preprocess_input_exprs_arg_string(input_exprs_str):
    ...
    for input_raw in filter(bool, input_exprs_str.split(';'))…
```

Sources: GHSA, OSV
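The danger of `eval` on user-controlled strings can be illustrated with a minimal sketch. The helper name `parse_expr_safe` and the payloads below are our illustration, not TensorFlow's code: `ast.literal_eval` accepts only Python literals, so an injected call expression raises instead of executing.

```python
import ast

# Hypothetical safer parser: ast.literal_eval evaluates only literals
# (numbers, strings, lists, dicts, ...), never function calls, so an
# attacker-controlled string cannot trigger code execution.
def parse_expr_safe(expr: str):
    return ast.literal_eval(expr)

# A benign literal parses fine.
assert parse_expr_safe("[1, 2, 3]") == [1, 2, 3]

# A payload that a bare eval() would execute is rejected instead.
try:
    parse_expr_safe("__import__('os').system('id')")
    raised = False
except ValueError:
    raised = True
assert raised
```

The fix in TensorFlow similarly removed the `eval` call; the sketch above only shows why literal-only parsing closes the injection.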
CVE-2021-41217 [MEDIUM] CWE-476 · Null pointer exception when `Exit` node is not preceded by `Enter` op
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [process of building the control flow graph](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/common_runtime/immutable_executor_state.cc#L284-L346) for a TensorFlow model is vulnerable to a null pointer exception when nodes that should be paired are not. …

Sources: GHSA, OSV
CVE-2021-41226 [MEDIUM] CWE-125 · Heap OOB in `SparseBinCount`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [implementation](https://github.com/tensorflow/tensorflow/blob/e71b86d47f8bc1816bf54d7bddc4170e47670b97/tensorflow/core/kernels/bincount_op.cc#L353-L417) of `SparseBinCount` is vulnerable to a heap OOB:

```python
import tensorflow as tf

tf.raw_ops.SparseBincount(
    indices=[[0], [1], [2]],
    values=[0, -10000000],
    dense_shape=[1, 1],
    size=[1],
    weights=[3, 2, 1],
    binary_output=False)
```

This is because…

Sources: GHSA, OSV
CVE-2021-41209 [MEDIUM] CWE-369 · FPE in convolutions with zero size filters
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [implementations for convolution operators](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/kernels/conv_ops.cc) trigger a division by 0 if passed empty filter tensor arguments.

### Patches
We have patched the issue in GitHub commit [f2c3931113eaafe9ef558faaddd48e00a6606235](https://github.com/tensorflow/tens…)

Sources: GHSA, OSV
CVE-2021-41197 [MEDIUM, CVSS 5.5] CWE-190 · Crashes due to overflow and `CHECK`-fail in ops with large tensor shapes
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
TensorFlow allows tensors to have a large number of dimensions, and each dimension can be as large as desired. However, the total number of elements in a tensor must fit within an `int64_t`. If an overflow occurs, `MultiplyWithoutOverflow` would return a negative result. In the majority of the TensorFlow codebase t…

Sources: GHSA, OSV
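The mechanism can be sketched with a small Python analogue of an overflow-aware multiply. The helper `mul_no_overflow` and the bound are our illustration, not TensorFlow's C++ code: dimension products are checked against the `int64_t` range, with a negative sentinel on overflow, mirroring how `MultiplyWithoutOverflow` signals failure.

```python
INT64_MAX = 2**63 - 1

# Hypothetical analogue of MultiplyWithoutOverflow: multiply two
# non-negative dimension sizes, returning -1 once the product no
# longer fits in a signed 64-bit integer. Python ints are arbitrary
# precision, so the product itself is exact and the check is simple.
def mul_no_overflow(x: int, y: int) -> int:
    if x < 0 or y < 0:
        return -1
    prod = x * y
    return prod if prod <= INT64_MAX else -1

# A modest shape is fine.
assert mul_no_overflow(1024, 1024) == 1048576
# Two huge dimensions overflow int64 and must be flagged, not wrapped;
# callers that ignore the negative sentinel hit the CHECK-fail described above.
assert mul_no_overflow(2**40, 2**40) == -1
```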
CVE-2021-41195 [MEDIUM, CVSS 5.5] CWE-190 · Crash in `tf.math.segment_*` operations
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The implementation of `tf.math.segment_*` operations results in a `CHECK`-fail related abort (and denial of service) if a segment id in `segment_ids` is large.

```python
import tensorflow as tf
import numpy as np

tf.math.segment_max(data=np.ones((1, 10, 1)), segment_ids=[1676240524292489355])
tf.math.segment_min(data=np.ones((1, 10, 1)), segment_ids=[1676240524292489355])
tf.math.segment_mean(d…
```

Sources: GHSA, OSV
CVE-2021-41198 [MEDIUM] CWE-190 · Overflow/crash in `tf.tile` when tiling tensor is large
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
If `tf.tile` is called with a large input argument then the TensorFlow process will crash due to a `CHECK`-failure caused by an overflow.

```python
import tensorflow as tf
import numpy as np

tf.keras.backend.tile(x=np.ones((1, 1, 1)), n=[100000000, 100000000, 100000000])
```

The number of elements in the output tensor is too much for the `int6…

Sources: GHSA, OSV
CVE-2021-41200 [MEDIUM] CWE-617 · Incomplete validation in `tf.summary.create_file_writer`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
If `tf.summary.create_file_writer` is called with non-scalar arguments, the code crashes due to a `CHECK`-fail.

```python
import tensorflow as tf
import numpy as np

tf.summary.create_file_writer(logdir='', flush_millis=np.ones((1, 2)))
```

### Patches
We have patched the issue in GitHub commit [874bda09e6702cd50bac90b453b50bcc65b2769e](https://…)

Sources: GHSA, OSV
CVE-2021-41215 [MEDIUM] CWE-476 · Null pointer exception in `DeserializeSparse`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [shape inference code for `DeserializeSparse`](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/ops/sparse_ops.cc#L152-L168) can trigger a null pointer dereference:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(3)

@tf.function
def test():
    y = tf.raw_ops.DeserializeSparse(
        serialized…
```

Sources: GHSA, OSV
CVE-2021-41202 [MEDIUM] CWE-681 · Overflow/crash in `tf.range`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
While calculating the size of the output within the `tf.range` kernel, there is a conditional statement of type `int64 = condition ? int64 : double`. Due to C++ implicit conversion rules, both branches of the condition will be cast to `double` and the result would be truncated before the assignment. This results in overflows:

```python
import tensorflow as tf

tf.sparse.eye(num_rows=922337203685…
```

Sources: GHSA, OSV
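The truncation described above can be demonstrated directly in Python, whose `float` is a C `double` (the specific values are our illustration): a `double` has a 53-bit mantissa, so a large `int64` value does not survive the int → double → int round trip.

```python
# int64 values near 2**63 cannot be represented exactly by a 64-bit
# float (53-bit mantissa), so casting to double and back silently
# lands on a different number, exactly as in the int64/double
# conditional inside the tf.range kernel.
big = 2**63 - 1          # INT64_MAX
as_double = float(big)   # nearest representable double is 2**63
assert int(as_double) != big
assert int(as_double) == 2**63

# Small sizes round-trip exactly, which is why the bug only surfaces
# for very large tf.range / tf.sparse.eye arguments.
assert int(float(10**6)) == 10**6
```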
CVE-2021-41227 [MEDIUM] CWE-125 · Arbitrary memory read in `ImmutableConst`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The `ImmutableConst` operation in TensorFlow can be tricked into reading arbitrary memory contents:

```python
import tensorflow as tf

with open('/tmp/test', 'wb') as f:
    f.write(b'\xe2' * 128)

data = tf.raw_ops.ImmutableConst(dtype=tf.string, shape=3, memory_region_name='/tmp/test')
print(data)
```

This is because the `tstring` TensorFlow string class has a special case f…

Sources: GHSA, OSV
CVE-2021-41222 [MEDIUM] CWE-682 · Segfault due to negative splits in `SplitV`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [implementation](https://github.com/tensorflow/tensorflow/blob/e71b86d47f8bc1816bf54d7bddc4170e47670b97/tensorflow/core/kernels/split_v_op.cc#L49-L205) of `SplitV` can trigger a segfault if an attacker supplies negative arguments:

```python
import tensorflow as tf

tf.raw_ops.SplitV(
    value=tf.constant([]),
    size_splits=[-1, -2],
    axis=0,
    num_split=2)
```

This oc…

Sources: GHSA, OSV
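A minimal sketch of the kind of validation such an input needs (the helper `validate_size_splits` is our illustration, not TensorFlow's patch): `-1` may appear at most once as the "infer this split" wildcard, every other entry must be non-negative, and the known sizes must be consistent with the length being split.

```python
# Hypothetical validator for a SplitV-style size_splits argument.
# Rules: at most one -1 wildcard; no other negative entries; the
# explicitly sized splits must fit within (wildcard present) or
# exactly cover (no wildcard) the dimension being split.
def validate_size_splits(size_splits, total_len):
    wildcards = sum(1 for s in size_splits if s == -1)
    if wildcards > 1:
        return False
    if any(s < 0 and s != -1 for s in size_splits):
        return False
    known = sum(s for s in size_splits if s != -1)
    if wildcards:
        return known <= total_len
    return known == total_len

assert validate_size_splits([2, 3], 5)
assert validate_size_splits([-1, 2], 5)
assert not validate_size_splits([-1, -2], 0)   # the CVE's crashing input
assert not validate_size_splits([-1, -1], 5)   # two wildcards
```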
CVE-2021-41223 [MEDIUM] CWE-125 · Heap OOB in `FusedBatchNorm` kernels
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [implementation](https://github.com/tensorflow/tensorflow/blob/e71b86d47f8bc1816bf54d7bddc4170e47670b97/tensorflow/core/kernels/fused_batch_norm_op.cc#L1292) of `FusedBatchNorm` kernels is vulnerable to a heap OOB:

```python
import tensorflow as tf

tf.raw_ops.FusedBatchNormGrad(
    y_backprop=tf.constant([i for i in range(9)], shape=(1, 1, 3, 3), dtype=tf.float32),
    x=tf.constant([i…
```

Sources: GHSA, OSV
CVE-2021-41196 [MEDIUM] CWE-191 · Crash in `max_pool3d` when size argument is 0 or negative
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The Keras pooling layers can trigger a segfault if the size of the pool is 0 or if a dimension is negative:

```python
import tensorflow as tf

pool_size = [2, 2, 0]
layer = tf.keras.layers.MaxPooling3D(strides=1, pool_size=pool_size)
input_tensor = tf.random.uniform([3, 4, 10, 11, 12], dtype=tf.float32)
res = layer(input_tensor)
```

Thi…

Sources: GHSA, OSV
CVE-2021-41218 [MEDIUM] CWE-369 · Integer division by 0 in `tf.raw_ops.AllToAll`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [shape inference code for `AllToAll`](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/ops/tpu_cross_replica_ops.cc#L25-L74) can be made to execute a division by 0:

```python
import tensorflow as tf

@tf.function
def func():
    return tf.raw_ops.AllToAll(
        input=[0.0, 0.1652, 0.6543],
        group_assignment=[1,…
```

Sources: GHSA, OSV
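The shape-inference failure mode generalizes: dividing a dimension by a count taken from user input must first check that the count is non-zero (and, for ops like `AllToAll`, that it divides the dimension evenly). A sketch, with the helper name `infer_split_dim` being our illustration rather than TensorFlow's code:

```python
# Hypothetical shape-inference helper: an AllToAll-style op splits a
# dimension of size `dim` into `split_count` pieces. Raising on a zero
# or non-dividing count replaces the unchecked integer division that
# the unpatched shape-inference code performed.
def infer_split_dim(dim: int, split_count: int) -> int:
    if split_count <= 0:
        raise ValueError("split_count must be positive")
    if dim % split_count != 0:
        raise ValueError("dimension not evenly divisible by split_count")
    return dim // split_count

assert infer_split_dim(6, 3) == 2
try:
    infer_split_dim(3, 0)     # the CVE's failure mode: division by 0
    ok = False
except ValueError:
    ok = True
assert ok
```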
CVE-2021-41207 [MEDIUM] CWE-369 · FPE in `ParallelConcat`
Affected: ≥ 2.6.0, < 2.6.1; ≥ 2.5.0, < 2.5.2; +1 more · Published: 2021-11-10

### Impact
The [implementation of `ParallelConcat`](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/kernels/inplace_ops.cc#L72-L97) misses some input validation and can produce a division by 0:

```python
import tensorflow as tf

@tf.function
def test():
    y = tf.raw_ops.ParallelConcat(values=[['tf']], shape=0)
    return y

test()
```

### Patches
We have patched the…

Sources: GHSA, OSV