Conversation
Verified for the entire range of classification nets. Quantization is disabled at the moment. A few unsupported ops remain in the conversion maps; these need to be mapped in the future as the Relax op inventory grows.
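The note about unsupported ops in the conversion maps suggests a fail-fast pattern: ops that have no Relax lowering yet can carry a stub converter that raises immediately, so gaps surface clearly as the Relax op inventory grows. A hypothetical sketch — the map contents, names, and converter signatures here are illustrative only, not TVM's actual frontend API:

```python
# Hypothetical conversion map (illustrative names, not TVM's actual API).
# Ops without a Relax mapping get a stub that raises a clear error.
def _unsupported(op_name):
    def _raise(*_args, **_kwargs):
        raise NotImplementedError(f"TFLite op '{op_name}' has no Relax mapping yet")
    return _raise

CONVERT_MAP = {
    "CONV_2D": lambda inputs: ("relax.nn.conv2d", inputs),        # placeholder converter
    "FULLY_CONNECTED": lambda inputs: ("relax.nn.dense", inputs),  # placeholder converter
    "CUSTOM_OP": _unsupported("CUSTOM_OP"),
}
```

With this shape, hitting an unmapped op during conversion fails with an explicit message naming the op, rather than a confusing downstream error.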
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the TFLite frontend for TVM's Relax framework by rebasing its implementation and expanding its capabilities. It introduces the fundamental functionality to convert TFLite models into Relax graphs, supporting a broader array of operations and model architectures. A crucial addition is the Flexbuffer decoder, which facilitates the interpretation of custom options embedded within TFLite models. These changes are rigorously validated through a comprehensive test suite, including end-to-end verification for various Keras classification networks, ensuring the accuracy and robustness of the model conversion process.
Code Review
This pull request introduces a TFLite frontend for Relax, including a flexbuffer parser and extensive tests. The changes look good overall, but there are some critical issues in the flexbuffer parser that could lead to incorrect behavior or crashes when handling different data types and byte widths. I've identified a couple of bugs in tflite_flexbuffer.py related to handling byte widths and data types during deserialization, and a minor issue with exception handling.
Note: Security Review is unavailable for this PR.
```python
unpack_str = ""
if byte_width == 1:
    unpack_str = "<B"
elif byte_width == 4:
    unpack_str = "<i"
assert unpack_str != ""
```
The indirect_jump function does not correctly handle all possible byte widths for offsets and uses a signed integer format for 4-byte offsets, which is incorrect for flexbuffer offsets. Flexbuffer offsets are unsigned and can have byte widths of 1, 2, 4, or 8. This implementation only supports 1 and 4-byte widths and incorrectly uses a signed format (<i) for 4-byte offsets. This can lead to incorrect parsing of flexbuffers.
```python
unpack_map = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}
if byte_width not in unpack_map:
    raise NotImplementedError(f"Unsupported byte width for indirect jump: {byte_width}")
unpack_str = unpack_map[byte_width]
```

```python
def decode_map(self, end, byte_width, parent_byte_width):
    """Decodes the flexbuffer map and returns a dict"""
    mid_loc = self.indirect_jump(end, parent_byte_width)
    map_size = struct.unpack("<i", self.buffer[mid_loc - byte_width : mid_loc])[0]
```
The size of the map is hardcoded to be parsed as a 4-byte signed integer (<i), but the byte_width can vary (1, 2, 4, or 8). Additionally, map sizes should be unsigned integers. This will cause incorrect parsing for flexbuffers that use different byte widths for map sizes.
You can fix this by determining the unpack format based on byte_width:
```python
unpack_map = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}
if byte_width not in unpack_map:
    raise NotImplementedError(f"Unsupported byte width for map size: {byte_width}")
unpack_str = unpack_map[byte_width]
map_size = struct.unpack(unpack_str, self.buffer[mid_loc - byte_width : mid_loc])[0]
```

```python
elif value_type == FlexBufferType.FBT_FLOAT:
    value = struct.unpack("<f", value_bytes)[0]
else:
    raise Exception
```
Using a generic Exception is not recommended as it can obscure the actual error. It's better to use a more specific exception type, like NotImplementedError, to provide more context about what went wrong. This helps with debugging and error handling.
```python
raise NotImplementedError(f"Flexbuffer type {value_type} is not supported for decoding.")
```
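Both byte-width fixes above reduce to the same pattern: select an unsigned `struct` format from the byte width. A minimal runnable sketch — `read_uint` is an illustrative helper, not part of the parser — showing why the unsigned format matters once a value exceeds the signed 32-bit range:

```python
import struct

# Flexbuffer offsets and sizes are unsigned; widths 1, 2, 4, and 8 are all legal.
UNPACK_MAP = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}

def read_uint(buffer, offset, byte_width):
    """Read an unsigned little-endian integer of `byte_width` bytes at `offset`."""
    if byte_width not in UNPACK_MAP:
        raise NotImplementedError(f"Unsupported byte width: {byte_width}")
    return struct.unpack(UNPACK_MAP[byte_width], buffer[offset : offset + byte_width])[0]

buf = struct.pack("<I", 3_000_000_000)  # value above the signed 32-bit range
print(read_uint(buf, 0, 4))             # 3000000000
print(struct.unpack("<i", buf)[0])      # -1294967296 with the signed "<i" format
```

The last line shows the original bug in miniature: decoding the same four bytes with `"<i"` wraps the value negative, which would corrupt any offset or size past 2 GiB.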