
chore: improve test coverage#13

Merged
shaonianche merged 2 commits into main from improve-test-coverage on Jan 29, 2026

Conversation

@shaonianche
Owner

No description provided.

Signed-off-by: friendsa <github@duanfei.org>
@gemini-code-assist
Contributor

Summary of Changes

Hello @shaonianche, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the test coverage of the fit_tool library by adding new, dedicated unit tests for fundamental data type definitions, utility conversion functions, and field component logic. Concurrently, the project's test coverage configuration has been adjusted to provide a more accurate representation of code coverage by omitting automatically generated and profile-related files from the reports.

Highlights

  • New Test Files Added: Comprehensive unit tests have been introduced for BaseType and FieldType in fit_tool/tests/test_base_type.py, for epoch and coordinate conversion utilities in fit_tool/tests/test_conversions.py, and for the FieldComponent class in fit_tool/tests/test_field_component.py.
  • Coverage Configuration Update: The pyproject.toml file has been updated to refine test coverage reporting by excluding generated files (fit_tool/gen/*), profile messages (fit_tool/profile/messages/*), and fit_tool/profile/profile_type.py from coverage calculations.
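
The exclusions described above would typically be expressed through coverage.py's omit setting in pyproject.toml. A sketch of what that configuration might look like (the exact section layout in this repository may differ):

```toml
[tool.coverage.run]
omit = [
    "fit_tool/gen/*",
    "fit_tool/profile/messages/*",
    "fit_tool/profile/profile_type.py",
]
```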


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@codecov

codecov bot commented Jan 28, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (main@430c1be). Learn more about missing BASE report.
⚠️ Report is 1 commit behind head on main.
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #13   +/-   ##
=======================================
  Coverage        ?   89.29%           
=======================================
  Files           ?       20           
  Lines           ?     1364           
  Branches        ?        0           
=======================================
  Hits            ?     1218           
  Misses          ?      146           
  Partials        ?        0           

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@gemini-code-assist bot left a comment

Code Review

This pull request significantly improves test coverage by adding several new test files. The tests are well-structured. My review focuses on leveraging pytest's features, like parametrization, to make the tests more concise, robust, and maintainable. I've also identified a case of incomplete test coverage in one of the new tests. Overall, great work on bolstering the test suite.

Comment on lines +162 to +189
class TestBaseTypeMinMax:
    def test_max_values(self):
        assert BaseType.ENUM.max == 0xff
        assert BaseType.SINT8.max == 0x7f
        assert BaseType.UINT8.max == 0xff
        assert BaseType.SINT16.max == 0x7fff
        assert BaseType.UINT16.max == 0xffff
        assert BaseType.SINT32.max == 0x7fffffff
        assert BaseType.UINT32.max == 0xffffffff
        assert BaseType.STRING.max is None
        assert BaseType.FLOAT32.max is None
        assert BaseType.FLOAT64.max is None
        assert BaseType.SINT64.max == 0x7fffffffffffffff
        assert BaseType.UINT64.max == 0xffffffffffffffff

    def test_min_values(self):
        assert BaseType.ENUM.min == 0x00
        assert BaseType.SINT8.min == -0x80
        assert BaseType.UINT8.min == 0x00
        assert BaseType.SINT16.min == -0x8000
        assert BaseType.UINT16.min == 0x0000
        assert BaseType.SINT32.min == -0x80000000
        assert BaseType.UINT32.min == 0x00000000
        assert BaseType.STRING.min is None
        assert BaseType.FLOAT32.min is None
        assert BaseType.FLOAT64.min is None
        assert BaseType.SINT64.min == -0x8000000000000000
        assert BaseType.UINT64.min == 0x0000000000000000
Severity: high

The tests for min and max properties are incomplete. They are missing checks for UINT8Z, UINT16Z, UINT32Z, BYTE, and UINT64Z. Since this PR is about improving test coverage, it's important to have complete tests.

I've refactored the tests to use pytest.mark.parametrize and included all BaseType members for completeness.

class TestBaseTypeMinMax:
    @pytest.mark.parametrize("base_type, expected_max", [
        (BaseType.ENUM, 0xff),
        (BaseType.SINT8, 0x7f),
        (BaseType.UINT8, 0xff),
        (BaseType.SINT16, 0x7fff),
        (BaseType.UINT16, 0xffff),
        (BaseType.SINT32, 0x7fffffff),
        (BaseType.UINT32, 0xffffffff),
        (BaseType.STRING, None),
        (BaseType.FLOAT32, None),
        (BaseType.FLOAT64, None),
        (BaseType.UINT8Z, 0xff),
        (BaseType.UINT16Z, 0xffff),
        (BaseType.UINT32Z, 0xffffffff),
        (BaseType.BYTE, 0xff),
        (BaseType.SINT64, 0x7fffffffffffffff),
        (BaseType.UINT64, 0xffffffffffffffff),
        (BaseType.UINT64Z, 0xffffffffffffffff),
    ])
    def test_max_values(self, base_type, expected_max):
        assert base_type.max == expected_max

    @pytest.mark.parametrize("base_type, expected_min", [
        (BaseType.ENUM, 0x00),
        (BaseType.SINT8, -0x80),
        (BaseType.UINT8, 0x00),
        (BaseType.SINT16, -0x8000),
        (BaseType.UINT16, 0x0000),
        (BaseType.SINT32, -0x80000000),
        (BaseType.UINT32, 0x00000000),
        (BaseType.STRING, None),
        (BaseType.FLOAT32, None),
        (BaseType.FLOAT64, None),
        (BaseType.UINT8Z, 0x00),
        (BaseType.UINT16Z, 0x0000),
        (BaseType.UINT32Z, 0x00000000),
        (BaseType.BYTE, 0x00),
        (BaseType.SINT64, -0x8000000000000000),
        (BaseType.UINT64, 0x0000000000000000),
        (BaseType.UINT64Z, 0x0000000000000000),
    ])
    def test_min_values(self, base_type, expected_min):
        assert base_type.min == expected_min

Comment on lines +5 to +29
class TestBaseTypeSize:
    def test_size_1_byte_types(self):
        assert BaseType.ENUM.size == 1
        assert BaseType.SINT8.size == 1
        assert BaseType.UINT8.size == 1
        assert BaseType.STRING.size == 1
        assert BaseType.UINT8Z.size == 1
        assert BaseType.BYTE.size == 1

    def test_size_2_byte_types(self):
        assert BaseType.SINT16.size == 2
        assert BaseType.UINT16.size == 2
        assert BaseType.UINT16Z.size == 2

    def test_size_4_byte_types(self):
        assert BaseType.SINT32.size == 4
        assert BaseType.UINT32.size == 4
        assert BaseType.FLOAT32.size == 4
        assert BaseType.UINT32Z.size == 4

    def test_size_8_byte_types(self):
        assert BaseType.FLOAT64.size == 8
        assert BaseType.SINT64.size == 8
        assert BaseType.UINT64.size == 8
        assert BaseType.UINT64Z.size == 8
Severity: medium

The tests for BaseType.size can be made more concise and robust by using pytest.mark.parametrize. This allows you to combine all size checks into a single, data-driven test, which is easier to read and maintain. If any case fails, pytest will report it individually.

class TestBaseTypeSize:
    @pytest.mark.parametrize("base_type, expected_size", [
        (BaseType.ENUM, 1),
        (BaseType.SINT8, 1),
        (BaseType.UINT8, 1),
        (BaseType.STRING, 1),
        (BaseType.UINT8Z, 1),
        (BaseType.BYTE, 1),
        (BaseType.SINT16, 2),
        (BaseType.UINT16, 2),
        (BaseType.UINT16Z, 2),
        (BaseType.SINT32, 4),
        (BaseType.UINT32, 4),
        (BaseType.FLOAT32, 4),
        (BaseType.UINT32Z, 4),
        (BaseType.FLOAT64, 8),
        (BaseType.SINT64, 8),
        (BaseType.UINT64, 8),
        (BaseType.UINT64Z, 8),
    ])
    def test_size(self, base_type, expected_size):
        assert base_type.size == expected_size

Comment on lines +11 to +33
class TestEpochConversions:
    def test_to_seconds_since_1989_epoch(self):
        ms_epoch = MILLISECONDS_EPOCH_1989_DELTA
        assert to_seconds_since_1989_epoch(ms_epoch) == 0

        ms_epoch = MILLISECONDS_EPOCH_1989_DELTA + 1000
        assert to_seconds_since_1989_epoch(ms_epoch) == 1

        ms_epoch = MILLISECONDS_EPOCH_1989_DELTA + 60000
        assert to_seconds_since_1989_epoch(ms_epoch) == 60

    def test_to_milliseconds_since_epoch(self):
        assert to_milliseconds_since_epoch(0) == MILLISECONDS_EPOCH_1989_DELTA

        assert to_milliseconds_since_epoch(1) == MILLISECONDS_EPOCH_1989_DELTA + 1000

        assert to_milliseconds_since_epoch(60) == MILLISECONDS_EPOCH_1989_DELTA + 60000

    def test_round_trip_conversion(self):
        original_ms = 1700000000000
        seconds = to_seconds_since_1989_epoch(original_ms)
        result_ms = to_milliseconds_since_epoch(seconds)
        assert result_ms == (original_ms // 1000) * 1000
Severity: medium

These tests can be made more concise and easier to extend by using pytest.mark.parametrize. This avoids having multiple assertions for different cases scattered in the test method body.

class TestEpochConversions:
    @pytest.mark.parametrize("ms_epoch, expected_seconds", [
        (MILLISECONDS_EPOCH_1989_DELTA, 0),
        (MILLISECONDS_EPOCH_1989_DELTA + 1000, 1),
        (MILLISECONDS_EPOCH_1989_DELTA + 60000, 60),
    ])
    def test_to_seconds_since_1989_epoch(self, ms_epoch, expected_seconds):
        assert to_seconds_since_1989_epoch(ms_epoch) == expected_seconds

    @pytest.mark.parametrize("seconds, ms_offset", [
        (0, 0),
        (1, 1000),
        (60, 60000),
    ])
    def test_to_milliseconds_since_epoch(self, seconds, ms_offset):
        expected_ms = MILLISECONDS_EPOCH_1989_DELTA + ms_offset
        assert to_milliseconds_since_epoch(seconds) == expected_ms

    @pytest.mark.parametrize("original_ms", [
        1700000000000,
        MILLISECONDS_EPOCH_1989_DELTA,
        MILLISECONDS_EPOCH_1989_DELTA + 1234,  # Test precision loss
    ])
    def test_round_trip_conversion(self, original_ms):
        seconds = to_seconds_since_1989_epoch(original_ms)
        result_ms = to_milliseconds_since_epoch(seconds)
        assert result_ms == (original_ms // 1000) * 1000
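
For context, the FIT epoch is 1989-12-31 00:00:00 UTC. Assuming fit_tool's MILLISECONDS_EPOCH_1989_DELTA follows that convention, the constant can be derived directly with the standard library:

```python
from datetime import datetime, timezone

# FIT timestamps count from 1989-12-31 00:00:00 UTC (the assumed convention
# behind fit_tool's MILLISECONDS_EPOCH_1989_DELTA).
FIT_EPOCH = datetime(1989, 12, 31, tzinfo=timezone.utc)
MILLISECONDS_EPOCH_1989_DELTA = int(FIT_EPOCH.timestamp() * 1000)

print(MILLISECONDS_EPOCH_1989_DELTA)  # 631065600000
```

This also explains the round-trip precision loss the suggested `MILLISECONDS_EPOCH_1989_DELTA + 1234` case exercises: converting to whole seconds truncates the sub-second part.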

Comment on lines +55 to +59
def test_round_trip_conversion(self):
    for degrees in [0, 45, 90, -90, 180, -180, 37.7749, -122.4194]:
        semicircles = to_semicircles(degrees)
        result = to_degrees(semicircles)
        assert abs(result - degrees) < 0.0001
Severity: medium

This test can be improved by using pytest.mark.parametrize for the test cases and pytest.approx for floating-point comparisons. This is more idiomatic and robust.

Suggested change

Before:

    def test_round_trip_conversion(self):
        for degrees in [0, 45, 90, -90, 180, -180, 37.7749, -122.4194]:
            semicircles = to_semicircles(degrees)
            result = to_degrees(semicircles)
            assert abs(result - degrees) < 0.0001

After:

    @pytest.mark.parametrize("degrees", [0, 45, 90, -90, 180, -180, 37.7749, -122.4194])
    def test_round_trip_conversion(self, degrees):
        semicircles = to_semicircles(degrees)
        result = to_degrees(semicircles)
        assert result == pytest.approx(degrees, abs=1e-7)
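
The semicircle unit behind these helpers is the FIT convention of mapping ±180° onto a signed 32-bit range. A minimal sketch of the conversion pair (assuming fit_tool uses this standard formula; its actual implementation may differ) shows why a small absolute tolerance suffices for the round trip:

```python
# FIT maps degrees onto a signed 32-bit "semicircle" scale: 180 degrees == 2**31.
# Sketch under that assumption; not fit_tool's actual implementation.
def to_semicircles(degrees: float) -> int:
    return round(degrees * (2**31 / 180))

def to_degrees(semicircles: int) -> float:
    return semicircles * (180 / 2**31)

# One semicircle is 180 / 2**31 ~ 8.4e-8 degrees, so rounding to the
# nearest semicircle loses less than 1e-7 degrees on the round trip.
assert abs(to_degrees(to_semicircles(37.7749)) - 37.7749) < 1e-7
```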

Comment on lines +5 to +57
class TestFieldComponent:
    def test_init(self):
        component = FieldComponent(
            field_id=1,
            accumulate=True,
            bits=8,
            scale=1.0,
            offset=0.0,
        )
        assert component.field_id == 1
        assert component.accumulate is True
        assert component.bits == 8
        assert component.scale == 1.0
        assert component.offset == 0.0

    def test_init_with_different_values(self):
        component = FieldComponent(
            field_id=255,
            accumulate=False,
            bits=16,
            scale=100.0,
            offset=500.0,
        )
        assert component.field_id == 255
        assert component.accumulate is False
        assert component.bits == 16
        assert component.scale == 100.0
        assert component.offset == 500.0

    def test_init_with_zero_values(self):
        component = FieldComponent(
            field_id=0,
            accumulate=False,
            bits=0,
            scale=0.0,
            offset=0.0,
        )
        assert component.field_id == 0
        assert component.accumulate is False
        assert component.bits == 0
        assert component.scale == 0.0
        assert component.offset == 0.0

    def test_init_with_negative_offset(self):
        component = FieldComponent(
            field_id=10,
            accumulate=True,
            bits=32,
            scale=0.001,
            offset=-500.0,
        )
        assert component.field_id == 10
        assert component.offset == -500.0
Severity: medium

The tests for FieldComponent.__init__ are repetitive. You can combine them into a single, data-driven test using pytest.mark.parametrize. This reduces code duplication and makes it easier to add new test cases in the future.

class TestFieldComponent:
    @pytest.mark.parametrize("field_id, accumulate, bits, scale, offset", [
        (1, True, 8, 1.0, 0.0),
        (255, False, 16, 100.0, 500.0),
        (0, False, 0, 0.0, 0.0),
        (10, True, 32, 0.001, -500.0),
    ])
    def test_init(self, field_id, accumulate, bits, scale, offset):
        component = FieldComponent(
            field_id=field_id,
            accumulate=accumulate,
            bits=bits,
            scale=scale,
            offset=offset,
        )
        assert component.field_id == field_id
        assert component.accumulate is accumulate
        assert component.bits == bits
        assert component.scale == scale
        assert component.offset == offset

Signed-off-by: friendsa <github@duanfei.org>
@shaonianche shaonianche merged commit d887272 into main on Jan 29, 2026
8 checks passed
shaonianche added a commit that referenced this pull request Feb 5, 2026
fix(fit_tool/__init__.py): update SDK_VERSION to 21.171.00 #13
@shaonianche shaonianche deleted the improve-test-coverage branch February 26, 2026 02:22
