Fix Keras Deserialization Issue of Lambda Layers (#1541)
Signed-off-by: zehao-intel <[email protected]>
zehao-intel authored Jan 17, 2024
1 parent 09eb5dd commit 9ec22c9
Showing 4 changed files with 17 additions and 3 deletions.
2 changes: 1 addition & 1 deletion .azure-pipelines/scripts/codeScan/pylint/pylint.sh
@@ -54,7 +54,7 @@ elif [ "${scan_module}" = "neural_insights" ]; then
fi

python -m pylint -f json --disable=R,C,W,E1129 --enable=line-too-long --max-line-length=120 --extension-pkg-whitelist=numpy --ignored-classes=TensorProto,NodeProto \
--ignored-modules=tensorflow,torch,torch.quantization,torch.tensor,torchvision,fairseq,mxnet,onnx,onnxruntime,intel_extension_for_pytorch,intel_extension_for_tensorflow,torchinfo,horovod,transformers \
--ignored-modules=tensorflow,keras,torch,torch.quantization,torch.tensor,torchvision,fairseq,mxnet,onnx,onnxruntime,intel_extension_for_pytorch,intel_extension_for_tensorflow,torchinfo,horovod,transformers \
/neural-compressor/${scan_module} > $log_dir/pylint.json

exit_code=$?
2 changes: 0 additions & 2 deletions docs/source/releases_info.md
@@ -19,8 +19,6 @@ The MSE tuning strategy does not work with the PyTorch adaptor layer. This strat

The diagnosis function does not work with ONNX Runtime 1.13.1 for QDQ format quantization of ONNX models. It cannot dump the output values of QDQ pairs due to a framework limitation.

Keras version 2.13.0 has an open issue [18284](https://github.com/keras-team/keras/issues/18284) caused by the absence of a `safe_mode` parameter in `tf.keras.models.model_from_json()`. This deficiency can prevent the successful quantization of certain Keras models.

## Incompatible Changes

[Neural Compressor v1.2](https://github.com/intel/neural-compressor/tree/v1.2) introduces incompatible changes in user facing APIs. Please refer to [incompatible changes](incompatible_changes.md) to know which incompatible changes are made in v1.2.
8 changes: 8 additions & 0 deletions neural_compressor/adaptor/keras.py
@@ -42,6 +42,7 @@
from .query import QueryBackendCapability

tf = LazyImport("tensorflow")
keras = LazyImport("keras")


def _add_supported_quantized_objects(custom_objects):
@@ -519,6 +520,13 @@ def _check_quantize_mode(self, json_model):
def _restore_model_from_json(self, json_model):
from tensorflow.keras.models import model_from_json

from neural_compressor.utils.utility import version1_gte_version2

if version1_gte_version2(keras.__version__, "2.13.1"):
from keras.src.saving import serialization_lib

serialization_lib.enable_unsafe_deserialization()

custom_objects = {}
# We need to keep a dictionary of custom objects as our quantized library
# is not recognized by keras.
@@ -21,6 +21,7 @@
import os
from collections import OrderedDict, UserDict

import keras
import numpy as np
import tensorflow as tf
import yaml
@@ -486,6 +487,13 @@ def _restore_model_from_json(self, json_model):
"""Generate a keras model from json files."""
from tensorflow.keras.models import model_from_json

from neural_compressor.tensorflow.utils import version1_gte_version2

if version1_gte_version2(keras.__version__, "2.13.1"):
from keras.src.saving import serialization_lib

serialization_lib.enable_unsafe_deserialization()

custom_objects = {}
# We need to keep a dictionary of custom objects as our quantized library
# is not recognized by keras.
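The two hunks above apply the same pattern: on Keras 2.13.1 and later, safe-mode deserialization rejects `Lambda` layers, so unsafe deserialization must be enabled before rebuilding the model from JSON. Below is a hedged standalone sketch of that pattern; `version_gte` is a minimal stand-in for the repo's `version1_gte_version2` helper, and the Keras/TensorFlow imports are deferred so the version check itself has no heavy dependencies.

```python
def version_gte(v1: str, v2: str) -> bool:
    """Minimal dotted-version comparison (stand-in for
    neural_compressor's version1_gte_version2; assumes purely
    numeric components like "2.13.1")."""
    parts1 = [int(p) for p in v1.split(".")[:3]]
    parts2 = [int(p) for p in v2.split(".")[:3]]
    return parts1 >= parts2


def restore_model_from_json(json_model: str):
    """Rebuild a Keras model from a JSON string, enabling unsafe
    deserialization first when the installed Keras requires it."""
    import keras
    from tensorflow.keras.models import model_from_json

    if version_gte(keras.__version__, "2.13.1"):
        # Newer Keras blocks Lambda-layer deserialization in safe
        # mode; opt out explicitly before loading the JSON model.
        from keras.src.saving import serialization_lib

        serialization_lib.enable_unsafe_deserialization()

    return model_from_json(json_model)
```

Deferring the `keras` import inside the function mirrors the repo's use of `LazyImport`, which avoids paying the TensorFlow/Keras import cost unless the restore path is actually taken.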
