
feat: add new hash expire cmd to pika #2883

Open
wants to merge 1 commit into unstable

Conversation

@bigdaronlee163 (Contributor) commented Aug 27, 2024

  1. pkhget pkhset
  2. pkhexpire pkhexpireat
  3. pkhexpiretime pkhpersist
  4. pkhttl

Summary by CodeRabbit

  • New Features

    • Introduced a new PKHash functionality with multiple commands for managing hash data structures, including setting, getting, deleting, and expiration of fields.
    • Added support for batch operations and scanning capabilities within hashes.
    • New methods for handling expiration and persistence of hash entries.
    • Implemented comprehensive error handling for all new PKHash operations.
  • Tests

    • Added a comprehensive suite of unit tests for PKHashes, covering various operations including expiration, CRUD operations, batch handling, and scanning.
  • Documentation

    • Updated comments and organization in the codebase for clarity and readability.

@github-actions github-actions bot added the ✏️ Feature New feature or request label Aug 27, 2024

coderabbitai bot commented Aug 27, 2024

Walkthrough

A new enumerator value PKHASH has been added to the AclCategory enum in include/acl.h. The changes also introduce a series of new command constants in include/pika_command.h and a comprehensive set of command classes in include/pika_pkhash.h for managing PKHash operations. Various methods related to these commands have been implemented across multiple files, including redis.cc, storage.cc, and new unit tests in pkhashes_test.cc. These modifications extend the functionality of the codebase and improve its structure and readability.
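
For orientation, a minimal sketch of the kind of enum additions described here; only the new names (PKHASH, kCmdFlagsPKHash, kPKHashDataCF, kPKHashes, kNones) come from this PR, while the declaration forms and surrounding enumerators are placeholders:

// Hedged sketch; not copied from the PR diff.
enum class AclCategory { /* ... existing categories ... */ PKHASH };              // include/acl.h
enum CmdFlags { /* ... existing flags ... */ kCmdFlagsPKHash };                   // include/pika_command.h
enum ColumnFamilyIndex { /* ... existing column families ... */ kPKHashDataCF };  // storage_define.h
enum class DataType { /* ... existing types ... */ kPKHashes, kNones };           // base_value_format.h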

Changes

File Path Change Summary
include/acl.h Added enumerator PKHASH to AclCategory enum.
include/pika_command.h Introduced new command constants for PKHash operations and added kCmdFlagsPKHash to CmdFlags enum.
include/pika_pkhash.h Added multiple command classes for PKHash operations, including PKHExpireCmd, PKHGetCmd, PKHSetCmd, etc.
src/pika_client_conn.cc Minor formatting change; added a blank line after external variable declarations.
src/pika_command.cc Modified InitCmdTable to include new PKHash commands and improved formatting of existing command initializations.
src/pika_pkhash.cc Implemented command classes for managing hash operations, including methods for expiration, retrieval, and setting values.
src/storage/include/storage/storage.h Added FieldValueTTL struct and new methods for PKHash operations (see the sketch after this table).
src/storage/include/storage/storage_define.h Added enum value kPKHashDataCF to ColumnFamilyIndex.
src/storage/src/base_filter.h Updated header inclusions and added class aliases for PKHashes.
src/storage/src/base_value_format.h Expanded DataType enum to include kPKHashes and updated related structures.
src/storage/src/pkhash_data_value_format.h Introduced classes PKHashDataValue and ParsedPKHashDataValue for encoding and parsing data formats.
src/storage/src/redis.cc Reformatted header inclusions, constructor, and added functionality for new column family.
src/storage/src/redis.h Added methods related to PKHash operations and updated existing method signatures for consistency.
src/storage/src/redis_hashes.cc Reformatted error message handling and updated header inclusions.
src/storage/src/redis_pkhashes.cc Implemented methods for managing PKHashes, including CRUD operations and expiration handling.
src/storage/src/redis_strings.cc Updated method signatures and error handling for consistency.
src/storage/src/storage.cc Added new methods for PKHash commands and adjusted method signatures for clarity.
src/storage/tests/hashes_test.cc Included necessary header files and made minor comment adjustments.
src/storage/tests/pkhashes_test.cc Added comprehensive unit tests for PKHash functionality using Google Test framework.
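
As a point of reference for the FieldValueTTL struct mentioned in the storage.h row above, here is a minimal sketch of what such a field/value/TTL triple could look like; the member names, types, and unit are assumptions for illustration only and are not taken from the PR:

struct FieldValueTTL {
  std::string field;  // hash field name (assumed member)
  std::string value;  // stored value for the field (assumed member)
  int32_t ttl = 0;    // remaining time-to-live, assumed to be in seconds
};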

Possibly related PRs

  • feat: dont store cmd(Get/Set/Hget/Hset) with too large key in cache #2849: This PR introduces a new method IsTooLargeKey in the Cmd class and related classes, which checks if a key exceeds a specified size limit. This is relevant as it directly relates to managing key sizes, which is a concern in the main PR's context of adding a new enumerator value PKHASH that may involve key management.
  • feat:add test for redis commands, including LPush, RPushX, BgSave, FlushDb and SetEx #2901: This PR enhances testing coverage for Redis commands, including those related to key management. It indirectly relates to the main PR by ensuring that commands handling the new PKHASH category are properly tested, which may involve key size considerations.
  • fix: compilation error #2934: This PR addresses a compilation error related to the handling of compaction in Redis. While it does not directly relate to the changes in the main PR, it reflects ongoing improvements in the Redis codebase that may impact how new features, like the PKHASH enumerator, are integrated and managed.

Suggested labels

🧹 Updates, ☢️ Bug

Suggested reviewers

  • chejinge
  • AlexStocks
  • Mixficsol

Poem

In the burrow where data hops,
New commands and enums, oh how it pops!
PKHash is here, with functions galore,
Expire and retrieve, who could ask for more?
With tests all set, let the code run free,
Hopping through hashes, as happy as can be! 🐰✨



coderabbitai bot left a comment

Actionable comments posted: 2

Outside diff range, codebase verification and nitpick comments (14)
src/storage/src/redis_pkhashes.cc (7)

23-63: Consider adding logging for better debugging and monitoring.

The function is correctly implemented with proper error handling and validation checks. Adding logging can help in better debugging and monitoring.

Consider adding logging statements at key points in the function.
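
As an illustration of this suggestion, a hedged sketch of what such logging could look like; the function below is a hypothetical stand-in for one of the Redis::PKH* methods (its signature is invented for illustration), and glog, which Pika already uses elsewhere, is assumed to be available:

#include <glog/logging.h>
#include "rocksdb/status.h"

// Hypothetical stand-in; only the logging pattern is the point of this sketch.
rocksdb::Status PKHExpireSketch(const std::string& key, int64_t ttl) {
  if (ttl <= 0) {
    LOG(WARNING) << "PKHExpire: invalid ttl=" << ttl << " for key=" << key;
    return rocksdb::Status::InvalidArgument("invalid expire time");
  }
  VLOG(1) << "PKHExpire: key=" << key << " ttl=" << ttl;
  // ... the existing expiration logic would run here ...
  return rocksdb::Status::OK();
}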


66-142: Consider adding logging for better debugging and monitoring.

The function is correctly implemented with proper error handling and validation checks. Adding logging can help in better debugging and monitoring.

Consider adding logging statements at key points in the function.


144-205: Consider adding logging for better debugging and monitoring.

The function is correctly implemented with proper error handling and validation checks. Adding logging can help in better debugging and monitoring.

Consider adding logging statements at key points in the function.


207-274: Consider adding logging for better debugging and monitoring.

The function is correctly implemented with proper error handling and validation checks. Adding logging can help in better debugging and monitoring.

Consider adding logging statements at key points in the function.


276-333: Consider adding logging for better debugging and monitoring.

The function is correctly implemented with proper error handling and validation checks. Adding logging can help in better debugging and monitoring.

Consider adding logging statements at key points in the function.


335-394: Consider adding logging for better debugging and monitoring.

The function is correctly implemented with proper error handling and validation checks. Adding logging can help in better debugging and monitoring.

Consider adding logging statements at key points in the function.


397-452: Consider adding logging for better debugging and monitoring.

The function is correctly implemented with proper error handling and validation checks. Adding logging can help in better debugging and monitoring.

Consider adding logging statements at key points in the function.

src/storage/src/storage.cc (6)

480-484: Review Error Handling in GetDBInstance

The PKHExpire function includes error handling for invalid TTL values, which is appropriate. However, the GetDBInstance function lacks explicit error handling, which might be a concern if the slot index retrieval or instance access could fail. Consider adding error handling to GetDBInstance to ensure robustness.

  • GetDBInstance: No explicit error handling observed.
  • PKHExpire: Handles invalid TTL values correctly.
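
A minimal sketch of the kind of check being asked for; the GetSlotIndex helper and the insts_ member are assumptions used only for illustration, not names taken from the Pika codebase:

// Hedged sketch, not the actual implementation in storage.cc.
std::unique_ptr<Redis>& Storage::GetDBInstance(const Slice& key) {
  size_t index = GetSlotIndex(key);  // hypothetical helper mapping a key to a slot
  if (index >= insts_.size()) {      // insts_ stands in for the instance container
    LOG(FATAL) << "slot index " << index << " out of range (size " << insts_.size() << ")";
  }
  return insts_[index];
}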
Analysis chain

LGTM! Verify error handling.

The function is correctly implemented. Ensure that GetDBInstance and PKHExpire handle errors correctly.

The code changes are approved.

Run the following script to verify error handling:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHExpire`.

# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHExpire'

Length of output: 2096


486-490: Potential Issue: Lack of Error Handling in GetDBInstance

The GetDBInstance function does not have explicit error handling, which could lead to issues if an invalid instance is returned. Consider adding error handling to ensure robustness. The PKHExpireat function in Redis correctly handles invalid timestamps.

  • Review the GetDBInstance function for potential error handling improvements.
Analysis chain

LGTM! Verify error handling.

The function is correctly implemented. Ensure that GetDBInstance and PKHExpireat handle errors correctly.

The code changes are approved.

Run the following script to verify error handling:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHExpireat`.

# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHExpireat'

Length of output: 1159


Script:

#!/bin/bash
# Description: Check for error handling related to `GetDBInstance`.

# Search for any error handling or checks related to `GetDBInstance`.
rg --type cpp 'GetDBInstance' -A 3 | grep -i 'if\|try\|catch\|error\|exception'

Length of output: 1873


504-508: Improve Error Handling in GetDBInstance

The GetDBInstance function lacks explicit error handling, which could lead to issues if an invalid index is accessed. Consider adding error checks to handle such cases. The PKHTTL function in Redis has partial error handling for specific conditions, which is a positive aspect.

  • File: src/storage/src/storage.cc
  • Function: GetDBInstance
Analysis chain

LGTM! Verify error handling.

The function is correctly implemented. Ensure that GetDBInstance and PKHTTL handle errors correctly.

The code changes are approved.

Run the following script to verify error handling:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHTTL`.

# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHTTL'

Length of output: 1084


Script:

#!/bin/bash
# Description: Gather more information on error handling in `GetDBInstance` and `PKHTTL`.

# Check for error handling in `GetDBInstance`.
rg --type cpp -A 10 'std::unique_ptr<Redis>& Storage::GetDBInstance' 

# Check for error handling in `PKHTTL`.
rg --type cpp -A 20 'Status Redis::PKHTTL'

Length of output: 2518


510-513: Ensure Robust Error Handling in GetDBInstance

The PKHGet function in Redis has appropriate error handling for type and existence checks. However, GetDBInstance lacks explicit error handling, which could lead to issues if the instance retrieval fails. Consider adding error handling to ensure robustness.

  • GetDBInstance in src/storage/src/storage.cc lacks error handling.
  • PKHGet in src/storage/src/redis_pkhashes.cc has adequate error handling.
Analysis chain

LGTM! Verify error handling.

The function is correctly implemented. Ensure that GetDBInstance and PKHGet handle errors correctly.

The code changes are approved.

Run the following script to verify error handling:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHGet`.

# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHGet'

Length of output: 1049


Script:

#!/bin/bash
# Description: Gather more context around `GetDBInstance` and `PKHGet` for error handling.

# Search for the full implementation of `GetDBInstance` to check for error handling.
rg --type cpp -A 20 'std::unique_ptr<Redis>& Storage::GetDBInstance'

# Search for the full implementation of `PKHGet` to check for error handling.
rg --type cpp -A 20 'Status Redis::PKHGet'

Length of output: 3083


498-502: Improve Error Handling in GetDBInstance

The GetDBInstance function lacks explicit error handling, which could lead to issues if an invalid index is accessed. Consider adding checks to ensure the index is valid before accessing the instance. The PKHPersist function has some error handling mechanisms in place, which is good practice.

  • File: src/storage/src/storage.cc
  • Function: GetDBInstance
Analysis chain

LGTM! Verify error handling.

The function is correctly implemented. Ensure that GetDBInstance and PKHPersist handle errors correctly.

The code changes are approved.

Run the following script to verify error handling:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHPersist`.

# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHPersist'

Length of output: 1096


Script:

#!/bin/bash
# Description: Retrieve complete implementations to verify error handling in `GetDBInstance` and `PKHPersist`.

# Search for the complete implementation of `GetDBInstance` to check for error handling.
rg --type cpp -A 20 $'std::unique_ptr<Redis>& Storage::GetDBInstance'

# Search for the complete implementation of `PKHPersist` to check for error handling.
rg --type cpp -A 20 $'Status Redis::PKHPersist'

Length of output: 3101


515-518: Add Error Handling to GetDBInstance and Verify Redis::PKHSet

  • GetDBInstance should include error handling to manage invalid instance indices.
  • Ensure comprehensive error handling in Redis::PKHSet to cover all possible error conditions.

Please review and address these points to improve the robustness of the code.

Analysis chain

LGTM! Verify error handling.

The function is correctly implemented. Ensure that GetDBInstance and PKHSet handle errors correctly.

The code changes are approved.

Run the following script to verify error handling:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHSet`.

# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHSet'

Length of output: 1054


Script:

#!/bin/bash
# Description: Gather more lines to verify error handling in `GetDBInstance` and `Redis::PKHSet`.

# Get additional context for `GetDBInstance`
rg --type cpp -A 10 $'std::unique_ptr<Redis>& Storage::GetDBInstance(const std::string& key)'

# Get additional context for `Redis::PKHSet`
rg --type cpp -A 10 $'Status Redis::PKHSet(const Slice& key, const Slice& field, const Slice& value, int32_t* res)'

Length of output: 211


Script:

#!/bin/bash
# Description: Refine search to verify error handling in `GetDBInstance` and `Redis::PKHSet`.

# Search for `GetDBInstance` with more context
rg --type cpp -A 15 'std::unique_ptr<Redis>& Storage::GetDBInstance'

# Search for `Redis::PKHSet` with more context
rg --type cpp -A 15 'Status Redis::PKHSet'

Length of output: 2388

src/pika_command.cc (1)

19-19: Add include guard for pika_pkhash.h.

Ensure that the header file pika_pkhash.h is included only once to prevent potential redefinition errors.

Consider adding an include guard in pika_pkhash.h if it doesn't already exist.
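
A short sketch of the suggestion; the guard macro name is an assumption, and #pragma once is an equally valid alternative if the project prefers it:

// include/pika_pkhash.h -- hedged sketch of an include guard
#ifndef INCLUDE_PIKA_PKHASH_H_
#define INCLUDE_PIKA_PKHASH_H_

// ... PKHExpireCmd, PKHGetCmd, PKHSetCmd and the other command class declarations ...

#endif  // INCLUDE_PIKA_PKHASH_H_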

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 4bfb5e7 and 8d150b6.

Files selected for processing (16)
  • include/acl.h (1 hunks)
  • include/pika_command.h (6 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_client_conn.cc (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (18 hunks)
  • src/storage/include/storage/storage_define.h (2 hunks)
  • src/storage/src/base_filter.h (6 hunks)
  • src/storage/src/base_value_format.h (5 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.cc (12 hunks)
  • src/storage/src/redis.h (17 hunks)
  • src/storage/src/redis_hashes.cc (37 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/storage.cc (36 hunks)
Files skipped from review due to trivial changes (2)
  • src/pika_client_conn.cc
  • src/storage/src/redis_hashes.cc
Additional comments not posted (147)
src/storage/include/storage/storage_define.h (2)

47-47: LGTM!

The addition of the new enumerator kPKHashDataCF is correct and consistent with the existing enumerators.

The code changes are approved.


124-132: LGTM!

The formatting changes improve readability without altering the logic of the function.

The code changes are approved.

src/storage/src/base_value_format.h (3)

21-31: LGTM!

The addition of the new enumerators kPKHashes and kNones is correct and consistent with the existing enumerators.

The code changes are approved.


34-35: LGTM!

The update to the DataTypeStrings array ensures that all enumerators have corresponding string values.

The code changes are approved.


52-53: LGTM!

The formatting changes improve readability without altering the logic of the code.

The code changes are approved.

Also applies to: 100-101, 153-153, 163-167

include/pika_pkhash.h (7)

14-38: LGTM!

The class PKHExpireCmd is correctly implemented.

The code changes are approved.


40-64: LGTM!

The class PKHExpireatCmd is correctly implemented.

The code changes are approved.


65-89: LGTM!

The class PKHExpiretimeCmd is correctly implemented.

The code changes are approved.


91-115: LGTM!

The class PKHPersistCmd is correctly implemented.

The code changes are approved.


117-141: LGTM!

The class PKHTTLCmd is correctly implemented.

The code changes are approved.


143-164: LGTM!

The class PKHGetCmd is correctly implemented.

The code changes are approved.


166-187: LGTM!

The class PKHSetCmd is correctly implemented.

The code changes are approved.

src/pika_pkhash.cc (14)

16-38: LGTM!

The method PKHExpireCmd::DoInitial is correctly implemented.

The code changes are approved.


40-53: LGTM!

The method PKHExpireCmd::Do is correctly implemented.

The code changes are approved.


55-80: LGTM!

The method PKHExpireatCmd::DoInitial is correctly implemented.

The code changes are approved.


81-94: LGTM!

The method PKHExpireatCmd::Do is correctly implemented.

The code changes are approved.


96-116: LGTM!

The method PKHExpiretimeCmd::DoInitial is correctly implemented.

The code changes are approved.


117-130: LGTM!

The method PKHExpiretimeCmd::Do is correctly implemented.

The code changes are approved.


132-150: LGTM!

The method PKHPersistCmd::DoInitial is correctly implemented.

The code changes are approved.


151-164: LGTM!

The method PKHPersistCmd::Do is correctly implemented.

The code changes are approved.


166-184: LGTM!

The method PKHTTLCmd::DoInitial is correctly implemented.

The code changes are approved.


185-198: LGTM!

The method PKHTTLCmd::Do is correctly implemented.

The code changes are approved.


200-207: LGTM!

The method PKHGetCmd::DoInitial is correctly implemented.

The code changes are approved.


209-222: LGTM!

The method PKHGetCmd::Do is correctly implemented.

The code changes are approved.


244-252: LGTM!

The method PKHSetCmd::DoInitial is correctly implemented.

The code changes are approved.


254-265: LGTM!

The method PKHSetCmd::Do is correctly implemented.

The code changes are approved.

src/storage/src/base_filter.h (5)

Line range hint 21-74: LGTM!

The class BaseMetaFilter is correctly implemented.

The code changes are approved.


Line range hint 114-228: LGTM!

The class BaseDataFilter is correctly implemented. The constructor reformatting improves readability.

The code changes are approved.


250-250: LGTM!

The type alias PKHashesMetaFilter is correctly implemented.

The code changes are approved.


251-251: LGTM!

The type alias PKHashesMetaFilterFactory is correctly implemented.

The code changes are approved.


252-252: LGTM!

The type alias PKHashesDataFilter is correctly implemented.

The code changes are approved.

include/acl.h (1)

55-55: LGTM!

The addition of the PKHASH enumerator to the AclCategory enum is consistent with the PR objectives and summary.

The code changes are approved.

include/pika_command.h (14)

141-141: LGTM!

The addition of the kCmdNamePKHSet constant is consistent with the PR objectives and summary.

The code changes are approved.


142-142: LGTM!

The addition of the kCmdNamePKHSetex constant is consistent with the PR objectives and summary.

The code changes are approved.


143-143: LGTM!

The addition of the kCmdNamePKHExpire constant is consistent with the PR objectives and summary.

The code changes are approved.


144-144: LGTM!

The addition of the kCmdNamePKHExpireat constant is consistent with the PR objectives and summary.

The code changes are approved.


145-145: LGTM!

The addition of the kCmdNamePKHExpiretime constant is consistent with the PR objectives and summary.

The code changes are approved.


146-146: LGTM!

The addition of the kCmdNamePKHTTL constant is consistent with the PR objectives and summary.

The code changes are approved.


147-147: LGTM!

The addition of the kCmdNamePKHPersist constant is consistent with the PR objectives and summary.

The code changes are approved.


148-148: LGTM!

The addition of the kCmdNamePKHGet constant is consistent with the PR objectives and summary.

The code changes are approved.


149-149: LGTM!

The addition of the kCmdNamePKHExists constant is consistent with the PR objectives and summary.

The code changes are approved.


150-150: LGTM!

The addition of the kCmdNamePKHDel constant is consistent with the PR objectives and summary.

The code changes are approved.


151-151: LGTM!

The addition of the kCmdNamePKHLen constant is consistent with the PR objectives and summary.

The code changes are approved.


152-152: LGTM!

The addition of the kCmdNamePKHStrlen constant is consistent with the PR objectives and summary.

The code changes are approved.


153-153: LGTM!

The addition of the kCmdNamePKHIncrby constant is consistent with the PR objectives and summary.

The code changes are approved.


154-154: LGTM!

The addition of the kCmdNamePKHIncrbyfloat constant is consistent with the PR objectives and summary.

The code changes are approved.

src/storage/src/redis.cc (3)

Line range hint 30-42: LGTM!

The constructor is correctly initializing the new column family options for pika_hash_data_cf.

The code changes are approved.


102-111: LGTM!

The function is correctly setting up the new column family options for pika_hash_data_cf.

The code changes are approved.


218-218: LGTM!

The function is correctly including kPKHashDataCF in the list of column families to compact.

The code changes are approved.

src/storage/src/redis.h (8)

246-248: LGTM!

The function is correctly implemented to retrieve column family handles for PK Hashes.

The code changes are approved.


253-254: LGTM!

The function is correctly implemented to set expiration time for PK Hash fields.

The code changes are approved.


255-256: LGTM!

The function is correctly implemented to set expiration time for PK Hash fields based on a timestamp.

The code changes are approved.


257-258: LGTM!

The function is correctly implemented to retrieve expiration times for PK Hash fields.

The code changes are approved.


259-260: LGTM!

The function is correctly implemented to retrieve TTL for PK Hash fields.

The code changes are approved.


261-262: LGTM!

The function is correctly implemented to make PK Hash fields persistent by removing their expiration.

The code changes are approved.


263-263: LGTM!

The function is correctly implemented to retrieve the value of a PK Hash field.

The code changes are approved.


264-264: LGTM!

The function is correctly implemented to set the value of a PK Hash field.

The code changes are approved.

src/storage/include/storage/storage.h (16)

26-26: LGTM!

The reordering of include statements is acceptable.

The code changes are approved.


28-28: LGTM!

The inclusion of the new header file is necessary for the new functionality.

The code changes are approved.


Line range hint 98-104: LGTM!

The addition of the operator+ method for KeyInfo struct is correct and useful for combining key information.

The code changes are approved.


111-113: LGTM!

The addition of the ttl field to ValueStatus struct is necessary for managing time-to-live information.

The code changes are approved.


157-159: LGTM!

The addition of the Operation enum is useful for defining various operations in the background task.

The code changes are approved.


171-173: LGTM!

The addition of the constructor and destructor for the Storage class is necessary for proper initialization and cleanup.

The code changes are approved.


256-257: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


262-263: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


272-273: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


482-483: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


506-507: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


516-517: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


590-591: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


753-754: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


1000-1001: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.


1035-1036: LGTM!

The method signature update ensures consistent formatting and clarity.

The code changes are approved.

src/storage/src/storage.cc (1)

492-496: LGTM! Verify error handling.

The function is correctly implemented. Ensure that GetDBInstance and PKHExpiretime handle errors correctly.

The code changes are approved.

Run the following script to verify error handling:

src/pika_command.cc (73)

486-487: Correct initialization of PKHSetCmd.

The command PKHSetCmd is correctly initialized and inserted into the command table.

The code changes are approved.


490-492: Correct initialization of PKHExpireCmd.

The command PKHExpireCmd is correctly initialized and inserted into the command table.

The code changes are approved.


494-496: Correct initialization of PKHExpireatCmd.

The command PKHExpireatCmd is correctly initialized and inserted into the command table.

The code changes are approved.


498-500: Correct initialization of PKHExpiretimeCmd.

The command PKHExpiretimeCmd is correctly initialized and inserted into the command table.

The code changes are approved.


502-503: Correct initialization of PKHTTLCmd.

The command PKHTTLCmd is correctly initialized and inserted into the command table.

The code changes are approved.


506-508: Correct initialization of PKHPersistCmd.

The command PKHPersistCmd is correctly initialized and inserted into the command table.

The code changes are approved.


510-511: Correct initialization of PKHGetCmd.

The command PKHGetCmd is correctly initialized and inserted into the command table.

The code changes are approved.


56-57: Consistent formatting for CompactCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


60-61: Consistent formatting for CompactRangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


80-81: Consistent formatting for FlushallCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


85-86: Consistent formatting for FlushdbCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


159-160: Consistent formatting for ClearCacheCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


162-163: Consistent formatting for LastsaveCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


247-248: Consistent formatting for SetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


251-253: Consistent formatting for GetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


256-258: Consistent formatting for DelCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


264-265: Consistent formatting for IncrCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


269-269: Consistent formatting for IncrbyCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


273-274: Consistent formatting for IncrbyfloatCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


277-278: Consistent formatting for DecrCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


282-282: Consistent formatting for DecrbyCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


286-286: Consistent formatting for GetsetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


290-290: Consistent formatting for AppendCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


293-295: Consistent formatting for MgetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


303-303: Consistent formatting for SetnxCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


306-307: Consistent formatting for SetexCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


310-311: Consistent formatting for PsetexCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


315-315: Consistent formatting for DelvxCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


318-319: Consistent formatting for MsetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


322-323: Consistent formatting for MsetnxCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


327-328: Consistent formatting for GetrangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


332-332: Consistent formatting for SetrangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


335-337: Consistent formatting for StrlenCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


340-342: Consistent formatting for ExistsCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


346-347: Consistent formatting for ExpireCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


351-352: Consistent formatting for PexpireCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


355-357: Consistent formatting for ExpireatCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


360-362: Consistent formatting for PexpireatCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


365-366: Consistent formatting for TtlCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


369-370: Consistent formatting for PttlCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


373-375: Consistent formatting for PersistCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


378-379: Consistent formatting for TypeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


391-391: Consistent formatting for PKSetexAtCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


394-395: Consistent formatting for PKScanRangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


398-399: Consistent formatting for PKRScanRangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


404-405: Consistent formatting for HDelCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


408-409: Consistent formatting for HSetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


412-414: Consistent formatting for HGetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


417-419: Consistent formatting for HGetallCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


422-424: Consistent formatting for HExistsCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


427-428: Consistent formatting for HIncrbyCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


431-433: Consistent formatting for HIncrbyfloatCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


436-438: Consistent formatting for HKeysCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


441-443: Consistent formatting for HLenCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


446-448: Consistent formatting for HMgetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


451-452: Consistent formatting for HMsetCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


455-456: Consistent formatting for HSetnxCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


459-461: Consistent formatting for HStrlenCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


464-466: Consistent formatting for HValsCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


469-470: Consistent formatting for HScanCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


473-474: Consistent formatting for HScanxCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


477-478: Consistent formatting for PKHScanRangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


481-482: Consistent formatting for PKHRScanRangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


514-516: Consistent formatting for LIndexCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


518-519: Consistent formatting for LInsertCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


522-524: Consistent formatting for LLenCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


526-527: Consistent formatting for BLPopCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


530-531: Consistent formatting for LPopCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


535-535: Consistent formatting for LPushCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


538-539: Consistent formatting for LPushxCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


543-544: Consistent formatting for LRangeCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


546-547: Consistent formatting for LRemCmd.

The formatting change improves readability by aligning parameters across multiple lines.

The code changes are approved.


549-550: Consistent formatting for …

Comment on lines 25 to 58
class PKHashDataValue : public InternalValue {
 public:
  /*
   * The header of the Value field is initially initialized to knulltype
   */
  explicit PKHashDataValue(const rocksdb::Slice& user_value) : InternalValue(DataType::kNones, user_value) {}
  virtual ~PKHashDataValue() {}

  virtual rocksdb::Slice Encode() {
    size_t usize = user_value_.size();
    size_t needed = usize + kSuffixReserveLength + kTimestampLength * 2;
    char* dst = ReAllocIfNeeded(needed);
    char* start_pos = dst;

    memcpy(dst, user_value_.data(), user_value_.size());
    dst += user_value_.size();
    memcpy(dst, reserve_, kSuffixReserveLength);
    dst += kSuffixReserveLength;
    EncodeFixed64(dst, ctime_);
    dst += kTimestampLength;
    EncodeFixed64(dst, etime_);
    dst += kTimestampLength;  // todo(DDD): to be confirmed; check whether this is needed.

    return rocksdb::Slice(start_pos, needed);
  }

 private:
  const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};

LGTM! But address the TODO comment.

The PKHashDataValue class is correctly implemented. However, the TODO comment in the Encode method needs to be addressed.

The code changes are approved.

Do you want me to help address the TODO comment or open a GitHub issue to track this task?

Comment on lines 55 to 132
class ParsedPKHashDataValue : public ParsedInternalValue {
 public:
  // Use this constructor after rocksdb::DB::Get(), since we use this in
  // the implement of user interfaces and may need to modify the
  // original value suffix, so the value_ must point to the string
  explicit ParsedPKHashDataValue(std::string* value) : ParsedInternalValue(value) {
    if (value_->size() >= kPKHashDataValueSuffixLength) {
      user_value_ = rocksdb::Slice(value_->data(), value_->size() - kPKHashDataValueSuffixLength);
      memcpy(reserve_, value_->data() + user_value_.size(), kSuffixReserveLength);
      ctime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength);
      etime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
    }
  }

  // Use this constructor in rocksdb::CompactionFilter::Filter(),
  // since we use this in Compaction process, all we need to do is parsing
  // the rocksdb::Slice, so don't need to modify the original value, value_ can be
  // set to nullptr
  explicit ParsedPKHashDataValue(const rocksdb::Slice& value) : ParsedInternalValue(value) {
    if (value.size() >= kPKHashDataValueSuffixLength) {
      user_value_ = rocksdb::Slice(value.data(), value.size() - kPKHashDataValueSuffixLength);
      memcpy(reserve_, value.data() + user_value_.size(), kSuffixReserveLength);
      ctime_ = DecodeFixed64(value.data() + user_value_.size() + kSuffixReserveLength);
      etime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
    }
  }

  virtual ~ParsedPKHashDataValue() = default;

  void SetEtimeToValue() override {
    if (value_) {
      char* dst = const_cast<char*>(value_->data()) + value_->size() - kTimestampLength;
      EncodeFixed64(dst, etime_);
    }
  }

  void SetCtimeToValue() override {
    if (value_) {
      char* dst = const_cast<char*>(value_->data()) + value_->size() - kTimestampLength - kTimestampLength;
      EncodeFixed64(dst, ctime_);
    }
  }

  void SetReserveToValue() {
    if (value_) {
      char* dst = const_cast<char*>(value_->data()) + value_->size() - kPKHashDataValueSuffixLength;
      memcpy(dst, reserve_, kSuffixReserveLength);
    }
  }

  virtual void StripSuffix() override {
    if (value_) {
      value_->erase(value_->size() - kPKHashDataValueSuffixLength, kPKHashDataValueSuffixLength);
    }
  }

  void SetTimestamp(int64_t timestamp) {
    etime_ = timestamp;
    SetEtimeToValue();
  }

 protected:
  virtual void SetVersionToValue() override {};

 private:
  const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};

LGTM! But address the empty SetVersionToValue method.

The ParsedPKHashDataValue class is correctly implemented. However, the SetVersionToValue method is empty and might need to be implemented.

The code changes are approved.

Do you want me to help implement the SetVersionToValue method or open a GitHub issue to track this task?

coderabbitai bot left a comment

Actionable comments posted: 4

Outside diff range and nitpick comments (1)
src/storage/tests/pkhashes_test.cc (1)

20-20: Avoid using 'using namespace' in the global scope

Using using namespace storage; in the global scope can lead to potential name collisions and reduces code maintainability. It's generally better to avoid importing entire namespaces into the global scope. Instead, consider prefixing with storage:: or using the namespace within a limited scope.
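
Two ways to follow this advice, sketched here; the fixture and test names are hypothetical:

// Option 1: drop the global using-directive and qualify names explicitly.
storage::Storage db;
storage::Status s = db.Open(storage_options, path);

// Option 2: keep the directive, but only inside the scope that needs it.
TEST_F(PKHashesTest, HSetBasic) {  // fixture/test names are hypothetical
  using namespace storage;         // visible only inside this test body
  // ...
}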

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 8d150b6 and 8ba7edd.

Files selected for processing (1)
  • src/storage/tests/pkhashes_test.cc (1 hunks)

Comment on lines 193 to 530
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

⚠️ Potential issue

Incorrect error checking using uninitialized type_status in HKeys test

In the HKeys test, the type_status map is declared but not used. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid because type_status is empty and uninitialized.

Apply this diff to correct the error checking:

-std::map<storage::DataType, rocksdb::Status> type_status;
 db.Expire("B_HKEYS_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());

If db.Expire returns a status, modify the code to capture and assert it:

-std::map<storage::DataType, rocksdb::Status> type_status;
-storage::Status s = db.Expire("B_HKEYS_KEY", 1);
+storage::Status s = db.Expire("B_HKEYS_KEY", 1);
 ASSERT_TRUE(s.ok());

Suggested change
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
storage::Status s = db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(s.ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

Comment on lines +27 to +33
void SetUp() override {
  std::string path = "./db/pkhashes";
  pstd::DeleteDirIfExist(path);
  mkdir(path.c_str(), 0755);
  storage_options.options.create_if_missing = true;
  s = db.Open(storage_options, path);
}

⚠️ Potential issue

Check the return status of db.Open in SetUp()

The return status of db.Open(storage_options, path); is assigned to s, but there is no check to verify if the operation was successful. Ignoring the status may lead to unexpected behavior if the database fails to open.

Apply this diff to add an assertion to ensure the database opens successfully:

 s = db.Open(storage_options, path);
+ASSERT_TRUE(s.ok());

Suggested change
void SetUp() override {
  std::string path = "./db/pkhashes";
  pstd::DeleteDirIfExist(path);
  mkdir(path.c_str(), 0755);
  storage_options.options.create_if_missing = true;
  s = db.Open(storage_options, path);
}
void SetUp() override {
  std::string path = "./db/pkhashes";
  pstd::DeleteDirIfExist(path);
  mkdir(path.c_str(), 0755);
  storage_options.options.create_if_missing = true;
  s = db.Open(storage_options, path);
  ASSERT_TRUE(s.ok());
}

Comment on lines +95 to +98
std::map<storage::DataType, rocksdb::Status> type_status;
int ret = db->Expire(key, 1);
if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {

⚠️ Potential issue

Unused variable type_status and incorrect error checking in make_expired

The variable type_status is declared but not used effectively. The condition !type_status[storage::DataType::kHashes].ok() is invalid because type_status is empty and uninitialized. This leads to improper error handling in the make_expired function.

Apply this diff to remove the unused variable and correct the error checking:

-static std::map<storage::DataType, rocksdb::Status> type_status;
 int ret = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (ret == 0) {
     return false;
 }

Alternatively, if you need to check the status returned by Expire, modify the code as follows:

-static std::map<storage::DataType, rocksdb::Status> type_status;
-int ret = db->Expire(key, 1);
+storage::Status s = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (!s.ok()) {
     return false;
 }

Suggested change
std::map<storage::DataType, rocksdb::Status> type_status;
int ret = db->Expire(key, 1);
if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
int ret = db->Expire(key, 1);
if (ret == 0) {
This suggestion removes the unused `type_status` variable and simplifies the condition to check only the return value of `db->Expire(key, 1)`.

Note: The second diff snippet provided in the review comment is also valid and could be used as an alternative solution. If you prefer that approach, the suggestion would be:
storage::Status s = db->Expire(key, 1);
if (!s.ok()) {

Comment on lines 134 to 136
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

⚠️ Potential issue

Incorrect error checking using uninitialized type_status in HDel test

In the HDel test, the type_status map is declared but not populated. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); will not function correctly since type_status is empty, leading to invalid error checking.

Apply this diff to correct the error checking:

-std::map<storage::DataType, rocksdb::Status> type_status;
 db.Expire("HDEL_TIMEOUT_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());

If db.Expire returns a status, capture and check it:

-std::map<storage::DataType, rocksdb::Status> type_status;
-storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
+storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
 ASSERT_TRUE(s.ok());

Suggested change
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(s.ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

coderabbitai bot left a comment

Actionable comments posted: 36

🧹 Outside diff range and nitpick comments (15)
include/pika_command.h (1)

316-316: New CmdFlags enum value added for PKHash

The addition of kCmdFlagsPKHash to the CmdFlags enum is consistent with the introduction of new PKHash commands. This flag will likely be used to identify and handle PKHash-specific operations.

Consider removing the TODO comment // TODO(DDD) as it doesn't provide any meaningful information. If there's a specific task related to this flag, it would be better to create a separate issue for tracking.

include/pika_pkhash.h (3)

183-183: Consider translating code comments to English for consistency

The comment at line 183 is in Chinese: // 每个命令的参数组成不同。 ("The argument composition differs for each command.") To maintain consistency and readability across the codebase, it is recommended to use English for code comments.


323-323: Reminder: Address the TODO comment and consider translating it to English

The comment // TODO(DDD):接受 ttl 参数。 indicates a pending implementation to accept the TTL parameter. Please implement this functionality and consider translating the comment to English for consistency.

Would you like assistance in implementing the TTL parameter handling or opening a GitHub issue to track this task?


483-487: Inconsistent method declaration for Clear() method

In PKHScanCmd, the Clear() method is declared as virtual void Clear() {}, whereas in other classes it is declared as void Clear() override {}. For consistency and to ensure correct overriding of the base class method, consider using the override keyword.

src/pika_pkhash.cc (1)

505-505: Unresolved TODO comment in PKHMSetexCmd::Do

A TODO comment is present: // TODO(DDD) 这个是干啥的? indicating uncertainty about the purpose of AddSlotKey("h", key_, db_);. This should be addressed to ensure code clarity.

Would you like assistance in determining the purpose of AddSlotKey here or help in resolving this TODO?

src/storage/src/redis.h (3)

86-87: Align member initializer list with declaration order.

In the constructor of KeyStatisticsDurationGuard, the initializer list is:

KeyStatisticsDurationGuard(Redis* that, const DataType type, const std::string& key)
    : ctx(that), key(key), start_us(pstd::NowMicros()), dtype(type) {}

Consider reordering the initializer list to match the order of member declarations to improve readability and prevent potential compiler warnings.


148-149: Consistent parameter passing in Append method.

The Append method signature is:

Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
              std::string& out_new_value);

For consistency and clarity, consider passing out_new_value as a pointer (std::string*) instead of a reference, aligning with how expired_timestamp_sec is passed.


532-533: Clean up commented code or provide clarification.

The code contains commented lines:

// TODO(wangshaoyi): seperate env for each rocksdb instance
//  rocksdb::Env* env_ = nullptr;

Consider:

  • Removing the commented code if it's obsolete.
  • Providing a clearer TODO comment or tracking it in an issue if it's a planned future enhancement.

Unresolved TODOs can clutter the codebase and may lead to confusion.

src/storage/include/storage/storage.h (2)

Line range hint 98-102: Fix the typo in invaild_keys to invalid_keys

The member variable invaild_keys is misspelled. Please correct it to invalid_keys throughout the code to prevent confusion and maintain code quality.

Apply the following diff:

 struct KeyInfo {
   uint64_t keys = 0;
   uint64_t expires = 0;
   uint64_t avg_ttl = 0;
-  uint64_t invaild_keys = 0;
+  uint64_t invalid_keys = 0;

-  KeyInfo() : keys(0), expires(0), avg_ttl(0), invaild_keys(0) {}
+  KeyInfo() : keys(0), expires(0), avg_ttl(0), invalid_keys(0) {}

   KeyInfo(uint64_t k, uint64_t e, uint64_t a, uint64_t i)
-      : keys(k), expires(e), avg_ttl(a), invaild_keys(i) {}
+      : keys(k), expires(e), avg_ttl(a), invalid_keys(i) {}

   KeyInfo operator+(const KeyInfo& info) {
     KeyInfo res;
     res.keys = keys + info.keys;
     res.expires = expires + info.expires;
     res.avg_ttl = avg_ttl + info.avg_ttl;
-    res.invaild_keys = invaild_keys + info.invaild_keys;
+    res.invalid_keys = invalid_keys + info.invalid_keys;
     return res;
   }
 };

Line range hint 98-102: Correct the calculation of avg_ttl when combining KeyInfo instances

Adding avg_ttl directly may not compute the accurate average when combining KeyInfo objects. Since avg_ttl represents an average, you should calculate the combined average based on the total accumulated TTL and the total number of keys.

Consider modifying the KeyInfo structure to keep track of the total TTL and total key count, then compute avg_ttl accordingly.
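A minimal sketch of that restructuring is shown below. It assumes the rename to invalid_keys from the previous comment is applied; the total_ttl accumulator is an invented name, not a field of the current struct:

```cpp
#include <cstdint>

// Hypothetical sketch: accumulate the total TTL and derive avg_ttl on merge.
struct KeyInfo {
  uint64_t keys = 0;
  uint64_t expires = 0;
  uint64_t total_ttl = 0;    // sum of TTLs over keys that carry an expiration (assumed new field)
  uint64_t avg_ttl = 0;      // derived value, kept for API compatibility
  uint64_t invalid_keys = 0;

  KeyInfo operator+(const KeyInfo& info) const {
    KeyInfo res;
    res.keys = keys + info.keys;
    res.expires = expires + info.expires;
    res.total_ttl = total_ttl + info.total_ttl;
    // Average over keys that actually have a TTL, instead of adding two averages.
    res.avg_ttl = (res.expires != 0) ? res.total_ttl / res.expires : 0;
    res.invalid_keys = invalid_keys + info.invalid_keys;
    return res;
  }
};
```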

src/storage/src/redis_pkhashes.cc (1)

75-77: Initialize meta_value_buf appropriately

The buffer meta_value_buf[4] is zero-initialized with {0}, but make sure EncodeFixed32 writes all four bytes before the buffer is wrapped in a Slice; relying on leftover zero bytes would silently encode the wrong count, and reading bytes that were never written would be undefined behavior.

src/storage/src/storage.cc (1)

540-543: Remove commented-out code for clarity.

The code block for PKHLenForce is commented out. If this code is no longer needed, consider removing it to improve readability and maintainability.

src/pika_command.cc (3)

555-556: Translate comments to English and address the questions

The comments on lines 555-556 are in Chinese and include TODOs:

// TODO(DDD) 为啥vals是慢的命令。 ("Why is Vals a slow command?")
// TODO(DDD) 这些标志位都是啥意思。 ("What do these flag bits mean?")

Please translate these comments to English and address the concerns raised about why Vals is marked as a slow command and clarify the meanings of the command flags.

Do you need assistance in determining the command performance classifications and understanding the flag definitions?


558-559: Clarify command flags for PKHValsCmd

Additionally, there is uncertainty about the meaning of the command flags:

// TODO(DDD) 这些标志位都是啥意思。 ("What do these flag bits mean?")

Please translate this comment to English and ensure that the flags used for PKHValsCmd accurately reflect its behavior and performance characteristics.

Would you like assistance in reviewing the command flags and ensuring they align with the intended functionality?


962-963: Improve formatting of the constructor initialization list

The initialization list in the Cmd constructor spans a single line, which can reduce readability:

    : name_(std::move(name)), arity_(arity), flag_(flag), aclCategory_(aclCategory), cache_missed_in_rtc_(false) {}

Consider formatting each member initialization on a separate line for better readability.

Apply this diff to enhance readability:

-    : name_(std::move(name)), arity_(arity), flag_(flag), aclCategory_(aclCategory), cache_missed_in_rtc_(false) {}
+    : name_(std::move(name)),
+      arity_(arity),
+      flag_(flag),
+      aclCategory_(aclCategory),
+      cache_missed_in_rtc_(false) {}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 8ba7edd and 1c6ef46.

📒 Files selected for processing (9)
  • include/pika_command.h (5 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (18 hunks)
  • src/storage/src/redis.h (16 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/storage.cc (34 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 554-554: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 763-763: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 847-847: Searching before insertion is not necessary.

(stlFindInsert)

🔇 Additional comments (21)
include/pika_command.h (2)

141-160: New PKHash command constants added

The addition of these new command constants for PKHash operations enhances the functionality of the Pika database. These constants align with the PR objectives, which mentioned introducing new commands for hash operations.


315-315: Minor formatting changes

These changes improve code readability by adding a comma at the end of the kCmdFlagsSlow enum value and adjusting the indentation of a member variable in the UnblockTaskArgs struct.

Also applies to: 506-506

src/pika_pkhash.cc (9)

63-65: Potential error in parsing timestamp value in PKHExpireatCmd::DoInitial

As with previous parsing issues, ensure that the condition correctly checks the return value of pstd::string2int when parsing timestamp_.
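A minimal sketch of the intended check, assuming pstd::string2int reports failure by returning 0 (as in the snippets elsewhere in this review), that the timestamp sits at argv_[2], and that CmdRes::kInvalidInt is the matching error code:

```cpp
// Hypothetical fragment for PKHExpireatCmd::DoInitial; the argument index is illustrative.
if (pstd::string2int(argv_[2].data(), argv_[2].size(), &timestamp_) == 0) {
  res_.SetRes(CmdRes::kInvalidInt);  // bail out when the timestamp is not a valid integer
  return;
}
```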


68-73: Unnecessary iterator increments in PKHExpireatCmd::DoInitial

Similar to PKHExpireCmd, remove unnecessary iter++; statements to improve code clarity.


74-76: Potential error in parsing numfields_ in PKHExpireatCmd::DoInitial

Ensure that the parsing condition for numfields_ correctly checks for successful parsing.


110-113: Potential error in parsing numfields_ in PKHExpiretimeCmd::DoInitial

Check that the return value of pstd::string2int is correctly evaluated when parsing numfields_.


145-147: Potential error in parsing numfields_ in PKHPersistCmd::DoInitial

Ensure that you are correctly checking the return value of pstd::string2int when parsing numfields_.


178-181: Potential error in parsing numfields_ in PKHTTLCmd::DoInitial

As in previous comments, verify that the parsing of numfields_ uses the correct condition for error checking.


182-183: Assignment to fields_ may be incorrect in PKHTTLCmd::DoInitial

Ensure that fields_ is assigned correctly after adjusting for any changes to iter.


216-217: ⚠️ Potential issue

Response for not found key should be $-1 in PKHGetCmd::Do

When the key or field is not found, the response is currently set to "$-1". In RESP, the null bulk string must be terminated with CRLF, i.e. "$-1\r\n", rather than a bare "$-1".

Apply this diff to correct the response:

-  res_.AppendContent("$-1");
+  res_.AppendContent("$-1\r\n");

Also, ensure that the response formatting complies with the Redis RESP protocol.

Likely invalid or redundant comment.


619-656: ⚠️ Potential issue

Potential response size exceeding limit in PKHGetAllCmd::Do

When accumulating the raw response data, there is a check against raw_limit, but if the limit is exceeded, the function returns an error without clearing or resetting the accumulated data. This may lead to inconsistent states.

Ensure that when the response size exceeds the limit, appropriate action is taken to handle or truncate the response safely.

Run the following script to verify the handling of large responses:

src/storage/src/redis.h (3)

17-18: Verify the necessity of the new include statements.

The added include statements are:

#include "pstd/include/env.h"
#include "pstd/include/pika_codis_slot.h"

Please ensure these headers are required for the new functionality. Unnecessary includes can increase compilation time and introduce unintended dependencies.


252-295: Confirm implementation consistency of PK Hash commands.

New PK Hash methods have been added, such as:

  • Status PKHExpire(...)
  • Status PKHGet(...)
  • Status PKHSet(...)
  • ...

Please verify that:

  • Method signatures are consistent with existing patterns in the class.
  • Parameter types and names are appropriate.
  • Documentation/comments are provided for new methods.
  • Implementation of these methods in the corresponding source files is complete and correct.

471-472: Review handling of DataType::kPKHashes in ExpectedStale method.

The ExpectedStale method has been updated:

case DataType::kZSets:
case DataType::kSets:
case DataType::kHashes:
case DataType::kPKHashes: {
    ParsedBaseMetaValue parsed_meta_value(meta_value);
    return (parsed_meta_value.IsStale() || parsed_meta_value.Count() == 0);
}

Ensure that:

  • The logic for DataType::kPKHashes correctly assesses staleness.
  • All other methods that handle data types are updated accordingly to include kPKHashes.
src/storage/src/redis_pkhashes.cc (1)

128-136: ⚠️ Potential issue

Ensure consistent use of base_meta_key

In the s.IsNotFound() block of the PKHSet method, you're using base_meta_key.Encode() when putting the meta value into the batch. Earlier in the code, you used key directly for the same purpose. For consistency and to prevent potential bugs, consider using base_meta_key.Encode() consistently.

Apply this diff to maintain consistency:

- batch.Put(handles_[kMetaCF], key, hashes_meta_value.Encode());
+ batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());

Likely invalid or redundant comment.

src/storage/src/storage.cc (1)

7-21: Include statements and namespace adjustments look good.

The additions of header files and namespace usages are appropriate and aid in code clarity.

src/pika_command.cc (5)

19-19: Approved inclusion of the new header file

The inclusion of "include/pika_pkhash.h" is necessary for the new PKHash commands and is appropriately added.


486-488: Ensure consistent command flags and arity for PKHSetCmd

The PKHSetCmd is correctly initialized. Please confirm that the arity -4 accurately reflects the command's expected arguments and is consistent with similar commands.


Line range hint 1019-1058: Check the logic in DoCommand for potential issues

In the Cmd::DoCommand method, ensure that the conditions and flow for cache reading, updating, and database operations are consistent and handle all necessary cases. Specifically, verify that:

  • The conditions for reading from cache (IsNeedReadCache()) and updating the cache (IsNeedUpdateCache()) are correctly applied.
  • The handling of cache misses (res().CacheMiss()) and the cache_missed_in_rtc_ flag are appropriately integrated into the logic.

To validate the logic flow, you can review the method and test with various command scenarios.


544-546: Confirm command parameters and arity for PKHMSetexCmd

The comment indicates that PKHMSetexCmd may require TTL and the arity might be -6. Currently, it's set to -4. Please verify the required parameters and update the arity and code accordingly.

To check if PKHMSetexCmd requires TTL and confirm its usage, you can use the following script:

#!/bin/bash
# Description: Search for definitions and usages of `PKHMSetexCmd` to confirm parameters.

# Test: Find all instances where `PKHMSetexCmd` is used. Expect: Correct handling of TTL and arity.
rg --type cpp --type h --word-regexp 'PKHMSetexCmd'

516-518: Clarify arity and parameters for PKHSetexCmd

There is uncertainty about the arity of PKHSetexCmd. Currently, it is set to -4. Please verify if this command requires additional parameters (such as TTL) that might affect the arity, and update it accordingly.

To verify the usage of PKHSetexCmd, you can search for its usage patterns:


private:
std::string key_;
bool is_force_; // 是否强制从db中获取 ("whether to force reading from the db")

⚠️ Potential issue

Uninitialized member variable is_force_

The member variable is_force_ is declared but not initialized. This may lead to undefined behavior if it is used before being assigned a value. Consider initializing it in the constructor or where appropriate.
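A minimal sketch of the fix, assuming false is the intended default; whether the initializer lives at the declaration or in the constructor's initializer list is a style choice:

```cpp
// Hypothetical sketch: default-initialize the flag at its declaration.
private:
  std::string key_;
  bool is_force_ = false;  // whether to force reading from the db (assumed default)
```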

Comment on lines +254 to +262
int32_t ret = 0;
s_ = db_->storage()->PKHSet(key_, field_, value_, &ret);
if (s_.ok()) {
res_.AppendContent(":" + std::to_string(ret));
AddSlotKey("h", key_, db_);
} else if (s_.IsInvalidArgument()) {
res_.SetRes(CmdRes::kMultiKey);
} else {
res_.SetRes(CmdRes::kErrOther, s_.ToString());
}

⚠️ Potential issue

Missing error handling in PKHSetCmd::Do

The function lacks error handling for potential database operation failures not covered by existing conditions. Ensure that all possible Status cases are properly handled.

Comment on lines +337 to +345
s_ = db_->storage()->HDel(key_, fields_, &deleted_);

if (s_.ok() || s_.IsNotFound()) {
res_.AppendInteger(deleted_);
} else if (s_.IsInvalidArgument()) {
res_.SetRes(CmdRes::kMultiKey);
} else {
res_.SetRes(CmdRes::kErrOther, s_.ToString());
}
}

⚠️ Potential issue

Incorrect use of HDel in PKHDelCmd::Do

The command PKHDelCmd uses db_->storage()->HDel, which may not be appropriate for PKHDelCmd. Verify that the correct database method is used for PK hash deletion.

If a specific method for PKHDel exists, replace it accordingly.
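Assuming the storage layer exposes the PKHDel wrapper added in this PR (mirroring the other PKH* calls seen in this review), the fix would be a one-line substitution; sketch only:

```cpp
// Hypothetical sketch: route the delete through the PK-hash column family instead of the plain hash one.
s_ = db_->storage()->PKHDel(key_, fields_, &deleted_);
```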

Comment on lines +425 to +426
s_.ToString().substr(0, std::char_traits<char>::length(ErrTypeMessage)) == ErrTypeMessage) {
res_.SetRes(CmdRes::kMultiKey);
} else if (s_.IsCorruption() && s_.ToString() == "Corruption: hash value is not an integer") {

🛠️ Refactor suggestion

Inefficient string comparison in PKHIncrbyCmd::Do

The condition uses substr and std::char_traits<char>::length to compare error messages, which can be simplified.

Consider using s_.ToString().starts_with(ErrTypeMessage) for clarity and efficiency if available in your C++ version. Otherwise, use s_.ToString().compare(0, strlen(ErrTypeMessage), ErrTypeMessage) == 0.
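A pre-C++20 sketch of the same idea; the helper name is invented for illustration:

```cpp
#include <string>
#include <string_view>

// Hypothetical helper: true when msg begins with prefix, without building a substring.
inline bool HasPrefix(const std::string& msg, std::string_view prefix) {
  return msg.compare(0, prefix.size(), prefix) == 0;
}

// Usage sketch inside PKHIncrbyCmd::Do:
//   if (s_.IsInvalidArgument() && HasPrefix(s_.ToString(), ErrTypeMessage)) {
//     res_.SetRes(CmdRes::kMultiKey);
//   }
```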

Comment on lines +28 to +33
iter++;
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {

🛠️ Refactor suggestion

Unnecessary iterator increments

Multiple iter++; statements are present, but iter is not used afterwards. These increments do not affect the parsing logic and can be removed to improve code clarity.

Apply this diff to remove unnecessary increments:

-  iter++;
-  iter++;
-  iter++;
-  iter++;
-  iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
iter++;
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {
if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {

Comment on lines 284 to 285
Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
std::string& out_new_value) {

🛠️ Refactor suggestion

Ensure consistent parameter passing style for output parameters.

In the Append method, out_new_value is passed as a non-const reference (std::string&). For consistency with other methods that use pointers for output parameters (e.g., std::string* value), consider changing out_new_value to a pointer (std::string* out_new_value).

Apply this diff to update the parameter:

-Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
-                       std::string& out_new_value) {
+Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
+                       std::string* out_new_value) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
std::string& out_new_value) {
Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
std::string* out_new_value) {

Comment on lines 520 to 533
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
auto& inst = GetDBInstance(key);
return inst->PKHSetex(key, field, value, ttl);
}

🛠️ Refactor suggestion

Use int64_t for TTL parameter to accommodate larger expiration times.

In the PKHSetex method, the TTL parameter ttl is currently an int32_t. For consistency and to support larger TTL values, consider changing it to int64_t.

Apply this diff:

-Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
+Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
auto& inst = GetDBInstance(key);
return inst->PKHSetex(key, field, value, ttl);
}
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl) {
auto& inst = GetDBInstance(key);
return inst->PKHSetex(key, field, value, ttl);
}

Comment on lines +550 to +558
Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
auto& inst = GetDBInstance(key);
return inst->PKHIncrby(key, field, value, ret, ttl);
}

🛠️ Refactor suggestion

Use int64_t for TTL parameter to support larger values.

In the PKHIncrby method, the TTL parameter ttl is of type int32_t. To handle larger TTL values and maintain consistency with other methods like Setex, consider changing ttl to int64_t.

Apply this diff:

-Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
+Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int64_t ttl) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
auto& inst = GetDBInstance(key);
return inst->PKHIncrby(key, field, value, ret, ttl);
}
Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int64_t ttl) {
auto& inst = GetDBInstance(key);
return inst->PKHIncrby(key, field, value, ret, ttl);
}

Comment on lines +480 to +494
Status Storage::PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
std::vector<int32_t>* rets) {
auto& inst = GetDBInstance(key);
return inst->PKHExpire(key, ttl, numfields, fields, rets);
}

🛠️ Refactor suggestion

Consider using int64_t for TTL parameter in PKHExpire.

The PKHExpire method uses int32_t ttl as the TTL parameter. To support larger TTL values and align with other methods, consider changing ttl to int64_t.

Apply this diff:

-Status Storage::PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
+Status Storage::PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
                           std::vector<int32_t>* rets) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Storage::PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
std::vector<int32_t>* rets) {
auto& inst = GetDBInstance(key);
return inst->PKHExpire(key, ttl, numfields, fields, rets);
}
Status Storage::PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
std::vector<int32_t>* rets) {
auto& inst = GetDBInstance(key);
return inst->PKHExpire(key, ttl, numfields, fields, rets);
}

Comment on lines +557 to +555
std::unique_ptr<Cmd> ehvalsptr = std::make_unique<PKHValsCmd>(
kCmdNamePKHVals, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHVals, std::move(ehvalsptr)));

⚠️ Potential issue

Address the TODO and clarify command performance for PKHValsCmd

The comment questions why Vals is marked as a slow command:

// TODO(DDD) 为啥vals是慢的命令。 ("Why is Vals a slow command?")

Please translate the comment to English and verify whether PKHValsCmd should be marked with kCmdFlagsSlow. If the PKHVals operation is time-consuming due to data size or processing complexity, marking it as slow may be appropriate.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 16

🧹 Outside diff range and nitpick comments (17)
CMakeLists.txt (3)

39-39: Consider making the build type configurable.

Setting CMAKE_BUILD_TYPE to "Debug" is useful for development but may not be suitable for all build scenarios. This setting enables debug symbols and disables optimizations, which can impact performance in production environments.

Consider making this configurable, for example:

if(NOT CMAKE_BUILD_TYPE)
  set(CMAKE_BUILD_TYPE "Debug")
endif()

This allows users to override the build type when running CMake, while defaulting to "Debug" if not specified.


40-40: Consider conditionally setting compiler flags based on build type.

The current setup applies debug-oriented compiler flags globally, which may not be suitable for all build scenarios, especially for release builds.

Consider setting these flags conditionally based on the build type. For example:

if(CMAKE_BUILD_TYPE STREQUAL "Debug")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")
else()
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -Wall")
endif()

This approach allows for different compiler flags for debug and release builds, providing better performance in release mode while maintaining debugging capabilities in debug mode.


38-40: Review the impact of global debug settings on build process.

The changes made to CMAKE_BUILD_TYPE and CMAKE_CXX_FLAGS are geared towards debugging. While this is beneficial for development, it may have unintended consequences:

  1. Performance impact in non-debug scenarios.
  2. Potential issues when building for different environments (e.g., production).

Consider implementing a more flexible configuration system that allows for easy switching between debug and release builds. This could involve:

  1. Using CMake options to control build type.
  2. Implementing separate flag sets for debug and release builds.
  3. Allowing for easy overrides of these settings from the command line or a configuration file.

These changes will make the build system more robust and adaptable to different use cases.

src/storage/src/redis.h (4)

116-120: LGTM: Consistent addition of optional prefetch_meta parameter.

The addition of the std::string&& prefetch_meta = {} parameter to multiple methods is a good optimization, allowing pre-fetched metadata to be passed in. This change is consistent across various methods and maintains backwards compatibility.

Consider adding documentation comments for the prefetch_meta parameter to explain its purpose and usage.

Also applies to: 386-387, 396-397
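A hedged sketch of the kind of comment that would help; HLen is used only as an illustrative carrier of the parameter, and the exact semantics should be confirmed against the implementation:

```cpp
// Hypothetical documentation sketch.
// prefetch_meta: optional meta value for `key` that the caller has already read from kMetaCF.
// When non-empty it is consumed (moved from) instead of issuing a second metadata lookup;
// when empty the method reads the meta key itself.
Status HLen(const Slice& key, int32_t* ret, std::string&& prefetch_meta = {});
```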


255-297: LGTM: New PK Hash Commands added consistently.

The addition of PK Hash Commands (PKHExpire, PKHGet, PKHSet, etc.) extends the existing Hash functionality. The method signatures are consistent with the existing codebase style and follow a clear naming convention.

Consider adding documentation comments for these new PK Hash methods to explain their purpose, how they differ from regular Hash commands, and any specific usage guidelines.


278-278: Clarify the status of the commented-out PKHLenForce method.

The PKHLenForce method is currently commented out. If this method is planned for future implementation, consider adding a TODO comment explaining its purpose and when it will be implemented. If it's no longer needed, it may be better to remove it entirely to avoid confusion.


299-300: Clarify the status of the commented-out PKHScanx method.

The PKHScanx method is currently commented out. If this method is planned for future implementation, consider adding a TODO comment explaining its purpose and when it will be implemented. If it's no longer needed, it may be better to remove it entirely to avoid confusion.

src/storage/src/redis_pkhashes.cc (5)

146-207: Remove unused variable and commented-out code

The PKHExpire function is well-implemented, correctly handling the expiration setting for specific fields in a hash. However, there are a few minor issues to address:

  1. The is_stale variable (line 156) is declared but never used.
  2. There are commented-out lines (160-161) that seem to be leftover from development.

Consider applying the following changes:

  1. Remove the unused is_stale variable:
-  bool is_stale = false;
  2. Remove the commented-out code:
-  // const rocksdb::Snapshot* snapshot;
-  // ScopeSnapshot ss(db_, &snapshot);

These changes will improve code cleanliness and remove potential confusion for future developers.


209-276: Remove unused variable

The PKHExpireat function is well-implemented, correctly handling the expiration setting for specific fields in a hash using an absolute timestamp. However, there's an unused variable that should be removed:

The is_stale variable (line 226) is declared but never used.

Consider removing the unused variable:

-  bool is_stale = false;

This change will improve code cleanliness and remove potential confusion for future developers.


278-335: Remove unused variable and unnecessary WriteBatch

The PKHExpiretime function is well-implemented, correctly retrieving expiration times for specific fields in a hash. However, there are two minor issues to address:

  1. The is_stale variable (line 283) is declared but never used.
  2. A WriteBatch batch (line 280) is declared but never used in this read-only operation.

Consider applying the following changes:

  1. Remove the unused is_stale variable:
-  bool is_stale = false;
  2. Remove the unnecessary WriteBatch declaration:
-  rocksdb::WriteBatch batch;

These changes will improve code cleanliness and remove potential confusion for future developers.


337-397: Remove unused variable and unnecessary WriteBatch

The PKHTTL function is well-implemented, correctly retrieving TTL (Time To Live) for specific fields in a hash. However, there are two minor issues to address:

  1. The is_stale variable (line 342) is declared but never used.
  2. A WriteBatch batch (line 339) is declared but never used in this read-only operation.

Consider applying the following changes:

  1. Remove the unused is_stale variable:
-  bool is_stale = false;
  2. Remove the unnecessary WriteBatch declaration:
-  rocksdb::WriteBatch batch;

These changes will improve code cleanliness and remove potential confusion for future developers.


399-454: Remove unused variable

The PKHPersist function is well-implemented, correctly removing expiration for specific fields in a hash. However, there's one minor issue to address:

The is_stale variable (line 404) is declared but never used.

Consider removing the unused variable:

-  bool is_stale = false;

This change will improve code cleanliness and remove potential confusion for future developers.

src/storage/src/redis_strings.cc (1)

Line range hint 1-1746: Overall code improvements with room for further enhancements

The changes in this file primarily focus on improving code readability through better formatting of function signatures and code blocks. These changes are positive and make the code easier to understand and maintain.

However, there's a consistent pattern of using string concatenation for error messages throughout the file. While this works, it's recommended to use a formatting library like fmt or std::format (C++20) for better performance and maintainability of error messages.

Consider applying the formatting library suggestion consistently across all error messages in this file and potentially throughout the entire codebase for uniformity.
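For illustration, a sketch of the recurring WRONGTYPE message built with fmt; this assumes fmt is (or would be made) available to the storage module, and the surrounding identifiers are taken from the snippets quoted in this review:

```cpp
#include <fmt/format.h>

// Hypothetical sketch: format the error in one call instead of chained operator+.
return Status::InvalidArgument(
    fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
                key.ToString(),
                DataTypeStrings[static_cast<int>(DataType::kStrings)],
                DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]));
```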

src/storage/src/pkhash_data_value_format.h (2)

47-47: Mixed language in code comment.

The comment on line 47 contains Chinese characters:

dst += kTimestampLength;  // todo(DDD) 待确认,看这个是否需要。

To maintain consistency and ensure readability for all contributors, please provide comments in English.

Update the comment to English or address the TODO if it's no longer necessary:

-    dst += kTimestampLength;  // todo(DDD) 待确认,看这个是否需要。
+    dst += kTimestampLength;  // TODO(DDD): Confirm if this increment is necessary.

53-53: Remove unused private member variable kDefaultValueSuffixLength.

The private member kDefaultValueSuffixLength declared on line 53 is not used within the PKHashDataValue class. Unused code can lead to confusion and should be removed to improve code clarity.

Apply this diff to remove the unused variable:

-  const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
src/storage/tests/pkhashes_test.cc (2)

104-106: Translate comments to English for consistency

There are several comments written in Chinese. For better maintainability and to ensure all team members can understand the code, please translate these comments to English.

Example lines:

  • Lines 104-106: Multiline comment in Chinese.
  • Line 328: // 从这里开始有问题。 ("problems start from here")
  • Line 344: // 实际上是 key value 过期了。 ("actually the key's value has expired")
  • Line 375: // TODO(DDD:expire)这里可以仿着写 expire的相关测试。 ("expire-related tests can be written here following this pattern")

Also applies to: 328-328, 344-344, 375-375


375-375: Implement expire-related tests as indicated by TODO

There is a TODO comment on line 375 suggesting that expire-related tests should be written. Implementing these tests will help ensure the correctness of the expiration functionality.

Would you like me to assist in generating these test cases or open a new GitHub issue to track this task?
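For reference, a hedged sketch of one such test, patterned after the PKHSet/PKHExpire signatures quoted in this review; the fixture name, the storage-level PKHGet signature, and the expectation that an expired field reads as NotFound are assumptions:

```cpp
// Hypothetical test sketch for src/storage/tests/pkhashes_test.cc.
TEST_F(PKHashesTest, PKHExpireSingleField) {
  int32_t ret = 0;
  ASSERT_TRUE(db.PKHSet("PKHEXPIRE_KEY", "FIELD", "VALUE", &ret).ok());

  // 1-second TTL on one field; rets reports the per-field outcome.
  std::vector<std::string> fields{"FIELD"};
  std::vector<int32_t> rets;
  ASSERT_TRUE(db.PKHExpire("PKHEXPIRE_KEY", 1, 1, fields, &rets).ok());

  std::this_thread::sleep_for(std::chrono::milliseconds(2000));

  std::string value;
  storage::Status s = db.PKHGet("PKHEXPIRE_KEY", "FIELD", &value);
  ASSERT_TRUE(s.IsNotFound());  // assumed behavior: the field is gone once its TTL elapses
}
```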

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 1c6ef46 and 0ec9b19.

📒 Files selected for processing (6)
  • CMakeLists.txt (1 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.h (17 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/redis_strings.cc (40 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 554-554: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 763-763: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 847-847: Searching before insertion is not necessary.

(stlFindInsert)

🔇 Additional comments (17)
src/storage/src/redis.h (2)

17-19: LGTM: Include statements updated appropriately.

The new include statements for pstd/include/env.h, pstd/include/pika_codis_slot.h, and src/custom_comparator.h have been added, which are likely necessary for the new PK Hash functionality. The src/redis_streams.h include has been moved, which is a minor organizational change.

Also applies to: 24-24


474-475: LGTM: ExpectedStale method updated to include DataType::kPKHashes.

The ExpectedStale method has been correctly updated to include the new DataType::kPKHashes in its switch statement. This ensures consistent handling of the new PK Hash type alongside existing types.

src/storage/src/redis_pkhashes.cc (6)

23-64: LGTM: Well-structured implementation of PKHGet

The PKHGet function is implemented correctly, with proper error handling, type checking, and use of snapshots for consistency. It efficiently retrieves the hash field value while handling various edge cases such as stale data and non-existent keys.


542-545: LGTM: Efficient implementation of PKHExists

The PKHExists function is implemented efficiently by reusing the PKHGet function. This approach reduces code duplication and maintains consistency in behavior between the two operations.


620-654: LGTM: Efficient implementation of PKHLen

The PKHLen function is well-implemented, efficiently handling prefetched metadata and various error cases. It correctly returns the number of fields in a hash while properly managing different scenarios such as stale or non-existent keys.


658-667: LGTM: Efficient implementation of PKHStrlen

The PKHStrlen function is implemented efficiently by reusing the PKHGet function to retrieve the value and then returning its length. This approach reduces code duplication and maintains consistency in behavior between the two operations.


942-1274: LGTM: Well-implemented hash operations

The remaining functions (PKHMGet, PKHKeys, PKHVals, PKHGetall, PKHScan, PKHashesExpire) are well-implemented, correctly handling various operations on hash data structures. They follow consistent patterns, handle error cases appropriately, and use suitable data structures for their operations.

Minor suggestions:

  1. Consider adding more detailed comments for complex logic, especially in the PKHScan function.
  2. In the PKHashesExpire function, the TODO comment on line 1263 should be addressed in a future update.

Overall, these functions provide a robust implementation of the required hash operations.


66-144: ⚠️ Potential issue

Consider updating TTL when values are equal

The PKHSet function is well-implemented overall, using a WriteBatch for atomic operations and handling various scenarios correctly. However, there's a potential issue with TTL handling when updating an existing field.

In the case where the new value is equal to the existing value (lines 105-107), the function returns early without updating the TTL or other metadata. If updating the TTL is desired even when the value hasn't changed, you should adjust the logic to handle this case.

Consider modifying the code as follows:

 if (data_value == value.ToString()) {
-  return Status::OK();
+  PKHashDataValue internal_value(value);
+  batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
+  statistic++;
 } else {
   PKHashDataValue internal_value(value);
   batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
   statistic++;
 }

This change ensures that the TTL is updated even when the value remains the same.

Likely invalid or redundant comment.

src/storage/src/redis_strings.cc (9)

8-8: Approved: Improved include statement ordering.

Moving the include to the top of the list improves consistency and follows common C++ coding standards.


66-67: Approved: Improved function signature formatting.

The multi-line format for the function signature enhances readability, especially for functions with many parameters.


126-126: Approved: Improved function signature formatting.

The multi-line format for the function signature enhances readability, especially for functions with many parameters.


222-223: Approved: Improved function signature formatting.

The multi-line format for the function signature enhances readability, especially for functions with many parameters.


234-234: Approved: Use of range-based for loop.

The change to a range-based for loop is a good modernization of the code. It improves readability and reduces the potential for errors associated with manual index management.


526-527: Approved: Improved function signature formatting.

The multi-line format for the function signature enhances readability, especially for functions with many parameters.


562-574: Approved: Improved code formatting.

The changes in indentation and line breaks improve the overall readability of the code.


1671-1673: Approved: Improved function signature formatting.

The multi-line format for the function signature enhances readability, especially for functions with many parameters.


1695-1696: Approved: Improved code formatting.

The changes in line breaks improve the overall readability of the code, especially for long conditions in if statements.

Also applies to: 1713-1714

Comment on lines 669 to 777
Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
*ret = 0;
rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);

uint64_t version = 0;
uint32_t statistic = 0;
std::string old_value;
std::string meta_value;

BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char value_buf[32] = {0};
char meta_value_buf[4] = {0};
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
version = parsed_hashes_meta_value.UpdateVersion();
parsed_hashes_meta_value.SetCount(1);
parsed_hashes_meta_value.SetEtime(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
HashesDataKey hashes_data_key(key, version, field);
Int64ToStr(value_buf, 32, value);
PKHashDataValue internal_value(value_buf);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
*ret = value;
} else {
version = parsed_hashes_meta_value.Version();
HashesDataKey hashes_data_key(key, version, field);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &old_value);
if (s.ok()) {
ParsedPKHashDataValue parsed_internal_value(&old_value);
parsed_internal_value.StripSuffix();
int64_t ival = 0;
if (StrToInt64(old_value.data(), old_value.size(), &ival) == 0) {
return Status::Corruption("hash value is not an integer");
}
if ((value >= 0 && LLONG_MAX - value < ival) || (value < 0 && LLONG_MIN - value > ival)) {
return Status::InvalidArgument("Overflow");
}
*ret = ival + value;
Int64ToStr(value_buf, 32, *ret);
PKHashDataValue internal_value(value_buf);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
statistic++;
} else if (s.IsNotFound()) {
Int64ToStr(value_buf, 32, value);
if (!parsed_hashes_meta_value.CheckModifyCount(1)) {
return Status::InvalidArgument("hash size overflow");
}
PKHashDataValue internal_value(value_buf);
parsed_hashes_meta_value.ModifyCount(1);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
*ret = value;
} else {
return s;
}
}
} else if (s.IsNotFound()) {
EncodeFixed32(meta_value_buf, 1);
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
HashesDataKey hashes_data_key(key, version, field);

Int64ToStr(value_buf, 32, value);
PKHashDataValue internal_value(value_buf);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
*ret = value;
} else {
return s;
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}

⚠️ Potential issue

Address unused TTL parameter

The PKHIncrby function is well-implemented, correctly handling the increment operation for hash fields, including various scenarios and overflow cases. However, there's one issue to address:

The ttl parameter (line 669) is not used anywhere in the function. This might be an oversight, as the function doesn't apply any TTL to the incremented field.

Consider one of the following options:

  1. If TTL should be applied, modify the function to use the ttl parameter:
 PKHashDataValue internal_value(value_buf);
+internal_value.SetRelativeTimestamp(ttl);
 batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());

Apply this change in all places where PKHashDataValue is created and stored.

  2. If TTL is not needed for this operation, remove the parameter:
-Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
+Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret) {

Make sure to update any calls to this function accordingly.

Implementing one of these changes will ensure that the function behaves as expected with regard to TTL handling.

Committable suggestion was skipped due to low confidence.

Comment on lines +547 to +641
Status Redis::PKHDel(const Slice& key, const std::vector<std::string>& fields, int32_t* ret) {
uint32_t statistic = 0;
std::vector<std::string> filtered_fields;
std::unordered_set<std::string> field_set;
for (const auto& iter : fields) {
const std::string& field = iter;
if (field_set.find(field) == field_set.end()) {
field_set.insert(field);
filtered_fields.push_back(iter);
}
}

rocksdb::WriteBatch batch;
rocksdb::ReadOptions read_options;
const rocksdb::Snapshot* snapshot;

std::string meta_value;
int32_t del_cnt = 0;
uint64_t version = 0;
ScopeRecordLock l(lock_mgr_, key);
ScopeSnapshot ss(db_, &snapshot);
read_options.snapshot = snapshot;

BaseMetaKey base_meta_key(key);
Status s = db_->Get(read_options, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
*ret = 0;
return Status::OK();
} else {
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& field : filtered_fields) {
HashesDataKey hashes_data_key(key, version, field);
s = db_->Get(read_options, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
del_cnt++;
statistic++;
batch.Delete(handles_[kPKHashDataCF], hashes_data_key.Encode());
} else if (s.IsNotFound()) {
continue;
} else {
return s;
}
}
*ret = del_cnt;
if (!parsed_hashes_meta_value.CheckModifyCount(-del_cnt)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(-del_cnt);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
*ret = 0;
return Status::OK();
} else {
return s;
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}

🛠️ Refactor suggestion

Optimize field filtering process

The PKHDel function is well-implemented, correctly handling the deletion of specified fields from a hash. However, there's an opportunity to optimize the field filtering process:

Consider using std::unordered_set::insert directly instead of searching before insertion. This can be more efficient, especially for larger input sizes. Here's a suggested optimization:

 std::vector<std::string> filtered_fields;
 std::unordered_set<std::string> field_set;
-for (const auto& iter : fields) {
-  const std::string& field = iter;
-  if (field_set.find(field) == field_set.end()) {
-    field_set.insert(field);
-    filtered_fields.push_back(iter);
-  }
+for (const auto& field : fields) {
+  auto result = field_set.insert(field);
+  if (result.second) {
+    filtered_fields.push_back(field);
+  }
 }

This change eliminates the need for an explicit search before insertion, potentially improving performance.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Redis::PKHDel(const Slice& key, const std::vector<std::string>& fields, int32_t* ret) {
uint32_t statistic = 0;
std::vector<std::string> filtered_fields;
std::unordered_set<std::string> field_set;
for (const auto& iter : fields) {
const std::string& field = iter;
if (field_set.find(field) == field_set.end()) {
field_set.insert(field);
filtered_fields.push_back(iter);
}
}
rocksdb::WriteBatch batch;
rocksdb::ReadOptions read_options;
const rocksdb::Snapshot* snapshot;
std::string meta_value;
int32_t del_cnt = 0;
uint64_t version = 0;
ScopeRecordLock l(lock_mgr_, key);
ScopeSnapshot ss(db_, &snapshot);
read_options.snapshot = snapshot;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(read_options, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
*ret = 0;
return Status::OK();
} else {
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& field : filtered_fields) {
HashesDataKey hashes_data_key(key, version, field);
s = db_->Get(read_options, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
del_cnt++;
statistic++;
batch.Delete(handles_[kPKHashDataCF], hashes_data_key.Encode());
} else if (s.IsNotFound()) {
continue;
} else {
return s;
}
}
*ret = del_cnt;
if (!parsed_hashes_meta_value.CheckModifyCount(-del_cnt)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(-del_cnt);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
*ret = 0;
return Status::OK();
} else {
return s;
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}
Status Redis::PKHDel(const Slice& key, const std::vector<std::string>& fields, int32_t* ret) {
uint32_t statistic = 0;
std::vector<std::string> filtered_fields;
std::unordered_set<std::string> field_set;
for (const auto& field : fields) {
auto result = field_set.insert(field);
if (result.second) {
filtered_fields.push_back(field);
}
}
rocksdb::WriteBatch batch;
rocksdb::ReadOptions read_options;
const rocksdb::Snapshot* snapshot;
std::string meta_value;
int32_t del_cnt = 0;
uint64_t version = 0;
ScopeRecordLock l(lock_mgr_, key);
ScopeSnapshot ss(db_, &snapshot);
read_options.snapshot = snapshot;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(read_options, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
*ret = 0;
return Status::OK();
} else {
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& field : filtered_fields) {
HashesDataKey hashes_data_key(key, version, field);
s = db_->Get(read_options, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
del_cnt++;
statistic++;
batch.Delete(handles_[kPKHashDataCF], hashes_data_key.Encode());
} else if (s.IsNotFound()) {
continue;
} else {
return s;
}
}
*ret = del_cnt;
if (!parsed_hashes_meta_value.CheckModifyCount(-del_cnt)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(-del_cnt);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
*ret = 0;
return Status::OK();
} else {
return s;
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}
🧰 Tools
🪛 cppcheck

[performance] 554-554: Searching before insertion is not necessary.

(stlFindInsert)

Comment on lines +756 to +861
Status Redis::PKHMSet(const Slice& key, const std::vector<FieldValue>& fvs) {
uint32_t statistic = 0;
std::unordered_set<std::string> fields;
std::vector<FieldValue> filtered_fvs;
for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
std::string field = iter->field;
if (fields.find(field) == fields.end()) {
fields.insert(field);
filtered_fvs.push_back(*iter);
}
}

rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);

uint64_t version = 0;
std::string meta_value;

BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
version = parsed_hashes_meta_value.InitialMetaValue();
if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
}
} else {
int32_t count = 0;
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
statistic++;
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
} else if (s.IsNotFound()) {
count++;
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
} else {
return s;
}
}
if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(count);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
EncodeFixed32(meta_value_buf, filtered_fvs.size());
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
}
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}

🛠️ Refactor suggestion

Optimize field filtering process

The PKHMSet function is well-implemented, correctly handling the setting of multiple fields in a hash. However, there's an opportunity to optimize the field filtering process:

Consider using std::unordered_set::insert directly instead of searching before insertion. This can be more efficient, especially for larger input sizes. Here's a suggested optimization:

 std::vector<FieldValue> filtered_fvs;
 std::unordered_set<std::string> fields;
-for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
-  std::string field = iter->field;
-  if (fields.find(field) == fields.end()) {
-    fields.insert(field);
-    filtered_fvs.push_back(*iter);
-  }
+for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
+  auto result = fields.insert(iter->field);
+  if (result.second) {
+    filtered_fvs.push_back(*iter);
+  }
 }

This change eliminates the need for an explicit search before insertion, potentially improving performance.
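For reference, a minimal standalone sketch of the insert-and-check idiom (not tied to this codebase): std::unordered_set::insert returns a pair whose second member is true only when a new element was actually inserted.

#include <string>
#include <unordered_set>

int main() {
  std::unordered_set<std::string> seen;
  // insert() returns std::pair<iterator, bool>; .second is true only for new elements.
  bool first_time = seen.insert("field1").second;  // true: "field1" was new
  bool duplicate = seen.insert("field1").second;   // false: "field1" already present
  return (first_time && !duplicate) ? 0 : 1;       // exits 0 when the idiom behaves as described
}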

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Redis::PKHMSet(const Slice& key, const std::vector<FieldValue>& fvs) {
uint32_t statistic = 0;
std::unordered_set<std::string> fields;
std::vector<FieldValue> filtered_fvs;
for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
std::string field = iter->field;
if (fields.find(field) == fields.end()) {
fields.insert(field);
filtered_fvs.push_back(*iter);
}
}
rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);
uint64_t version = 0;
std::string meta_value;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
version = parsed_hashes_meta_value.InitialMetaValue();
if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
}
} else {
int32_t count = 0;
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
statistic++;
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
} else if (s.IsNotFound()) {
count++;
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
} else {
return s;
}
}
if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(count);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
EncodeFixed32(meta_value_buf, filtered_fvs.size());
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
}
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}
Status Redis::PKHMSet(const Slice& key, const std::vector<FieldValue>& fvs) {
uint32_t statistic = 0;
std::unordered_set<std::string> fields;
std::vector<FieldValue> filtered_fvs;
for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
auto result = fields.insert(iter->field);
if (result.second) {
filtered_fvs.push_back(*iter);
}
}
rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);
uint64_t version = 0;
std::string meta_value;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
version = parsed_hashes_meta_value.InitialMetaValue();
if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
}
} else {
int32_t count = 0;
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
statistic++;
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
} else if (s.IsNotFound()) {
count++;
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
} else {
return s;
}
}
if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(count);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
EncodeFixed32(meta_value_buf, filtered_fvs.size());
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue inter_value(fv.value);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
}
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}
🧰 Tools
🪛 cppcheck

[performance] 763-763: Searching before insertion is not necessary.

(stlFindInsert)

Comment on lines 456 to 563
Status Redis::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
if (ttl <= 0) {
return Status::InvalidArgument("invalid expire time");
}

rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);

int32_t version = 0;
std::string meta_value;
uint32_t statistic = 0;

BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
// 1. Check whether the type matches and whether the key has expired.
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}

if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
version = parsed_hashes_meta_value.InitialMetaValue();
parsed_hashes_meta_value.SetCount(1);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
HashesDataKey data_key(key, version, field);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
// *res = 1; // TODO: add return value
} else {
version = parsed_hashes_meta_value.Version();
std::string data_value;
HashesDataKey hashes_data_key(key, version, field);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
// *res = 0 ;// TODO: add return value
// Check whether data_value equals value.
if (data_value == value.ToString()) {
// TODO(DDD): update the TTL
return Status::OK();
} else {
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
statistic++;
}

} else if (s.IsNotFound()) {
if (!parsed_hashes_meta_value.CheckModifyCount(1)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(1);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
// *res = 1; // TODO: add return value
} else {
return s;
}
}
} else if (s.IsNotFound()) {
// char str[4];
EncodeFixed32(meta_value_buf, 1);
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
HashesDataKey data_key(key, version, field);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
} else {
return s;
}

return db_->Write(default_write_options_, &batch);
}

⚠️ Potential issue

Address TODO comments and potential TTL issue

The PKHSetex function is generally well-implemented, handling various scenarios correctly. However, there are a few issues to address:

  1. There are TODO comments that need to be resolved:

    • Line 492: // *res = 1; // TODO: add return value
    • Line 499: // *res = 0 ;// TODO: add return value
    • Line 502: // TODO(DDD): update the TTL
  2. There's a potential issue with TTL handling when updating an existing field. When the new value is equal to the existing value (lines 501-503), the function returns early without updating the TTL.

Consider addressing these issues:

  1. Implement the TODO items, adding a return value mechanism if needed.

  2. Update the TTL even when the value remains the same:

 if (data_value == value.ToString()) {
-  // TODO(DDD): update the TTL
-  return Status::OK();
+  PKHashDataValue ehashes_value(value);
+  ehashes_value.SetRelativeTimestamp(ttl);
+  batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
+  statistic++;
 } else {
   PKHashDataValue ehashes_value(value);
   ehashes_value.SetRelativeTimestamp(ttl);
   batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
   statistic++;
 }

These changes will ensure that the TTL is always updated and improve the overall functionality of the method.
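Since both branches of the change above end up identical, an optional further simplification (a sketch under the same assumptions and surrounding variables as the snippet above) is to drop the comparison and always rewrite the existing entry so its TTL is refreshed:

// Field already exists (s.ok()): rewrite it unconditionally so the TTL is refreshed,
// whether or not the stored value changed; data_value no longer needs to be compared.
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
statistic++;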

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Redis::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
if (ttl <= 0) {
return Status::InvalidArgument("invalid expire time");
}
rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);
int32_t version = 0;
std::string meta_value;
uint32_t statistic = 0;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
// 1. Check whether the type matches and whether the key has expired.
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
version = parsed_hashes_meta_value.InitialMetaValue();
parsed_hashes_meta_value.SetCount(1);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
HashesDataKey data_key(key, version, field);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
// *res = 1; // TODO: add return value
} else {
version = parsed_hashes_meta_value.Version();
std::string data_value;
HashesDataKey hashes_data_key(key, version, field);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
// *res = 0 ;// TODO: add return value
// Check whether data_value equals value.
if (data_value == value.ToString()) {
// TODO(DDD): update the TTL
return Status::OK();
} else {
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
statistic++;
}
} else if (s.IsNotFound()) {
if (!parsed_hashes_meta_value.CheckModifyCount(1)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(1);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
// *res = 1; // TODO: add return value
} else {
return s;
}
}
} else if (s.IsNotFound()) {
// char str[4];
EncodeFixed32(meta_value_buf, 1);
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
HashesDataKey data_key(key, version, field);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
} else {
return s;
}
return db_->Write(default_write_options_, &batch);
}
Status Redis::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
if (ttl <= 0) {
return Status::InvalidArgument("invalid expire time");
}
rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);
int32_t version = 0;
std::string meta_value;
uint32_t statistic = 0;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
// 1. Check whether the type matches and whether the key has expired.
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
version = parsed_hashes_meta_value.InitialMetaValue();
parsed_hashes_meta_value.SetCount(1);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
HashesDataKey data_key(key, version, field);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
// *res = 1; // TODO: add return value
} else {
version = parsed_hashes_meta_value.Version();
std::string data_value;
HashesDataKey hashes_data_key(key, version, field);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
// *res = 0 ;// TODO: add return value
// Check whether data_value equals value.
if (data_value == value.ToString()) {
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
statistic++;
} else {
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
statistic++;
}
} else if (s.IsNotFound()) {
if (!parsed_hashes_meta_value.CheckModifyCount(1)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(1);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
// *res = 1; // TODO: add return value
} else {
return s;
}
}
} else if (s.IsNotFound()) {
// char str[4];
EncodeFixed32(meta_value_buf, 1);
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
HashesDataKey data_key(key, version, field);
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
} else {
return s;
}
return db_->Write(default_write_options_, &batch);
}

Comment on lines 840 to 963
uint32_t statistic = 0;
std::unordered_set<std::string> fields;
std::vector<FieldValueTTL> filtered_fvs;
for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
std::string field = iter->field;
if (fields.find(field) == fields.end()) {
fields.insert(field);
filtered_fvs.push_back(*iter);
}
}

rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);

int32_t version = 0;
std::string meta_value;

BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}

if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
version = parsed_hashes_meta_value.InitialMetaValue();
if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
// parsed_hashes_meta_value.set_timestamp(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
}
} else {
int32_t count = 0;
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
statistic++;
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
} else if (s.IsNotFound()) {
count++;
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
} else {
return s;
}
}

if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
return Status::InvalidArgument("hash size overflow");
}

parsed_hashes_meta_value.ModifyCount(count);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
// char str[4];
EncodeFixed32(meta_value_buf, filtered_fvs.size());
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
}
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}

🛠️ Refactor suggestion

Optimize field filtering process

The PKHMSetex function is well-implemented, correctly handling the setting of multiple fields with TTLs in a hash. However, there's an opportunity to optimize the field filtering process:

Consider using std::unordered_set::insert directly instead of searching before insertion. This can be more efficient, especially for larger input sizes. Here's a suggested optimization:

 std::vector<FieldValueTTL> filtered_fvs;
 std::unordered_set<std::string> fields;
-for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
-  std::string field = iter->field;
-  if (fields.find(field) == fields.end()) {
-    fields.insert(field);
-    filtered_fvs.push_back(*iter);
-  }
+for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
+  auto result = fields.insert(iter->field);
+  if (result.second) {
+    filtered_fvs.push_back(*iter);
+  }
 }

This change eliminates the need for an explicit search before insertion, potentially improving performance.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Redis::PKHMSetex(const Slice& key, const std::vector<FieldValueTTL>& fvts) {
uint32_t statistic = 0;
std::unordered_set<std::string> fields;
std::vector<FieldValueTTL> filtered_fvs;
for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
std::string field = iter->field;
if (fields.find(field) == fields.end()) {
fields.insert(field);
filtered_fvs.push_back(*iter);
}
}
rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);
int32_t version = 0;
std::string meta_value;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
version = parsed_hashes_meta_value.InitialMetaValue();
if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
// parsed_hashes_meta_value.set_timestamp(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
}
} else {
int32_t count = 0;
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
statistic++;
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
} else if (s.IsNotFound()) {
count++;
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
} else {
return s;
}
}
if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(count);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
// char str[4];
EncodeFixed32(meta_value_buf, filtered_fvs.size());
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
}
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}
Status Redis::PKHMSetex(const Slice& key, const std::vector<FieldValueTTL>& fvts) {
uint32_t statistic = 0;
std::unordered_set<std::string> fields;
std::vector<FieldValueTTL> filtered_fvs;
for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
auto result = fields.insert(iter->field);
if (result.second) {
filtered_fvs.push_back(*iter);
}
}
rocksdb::WriteBatch batch;
ScopeRecordLock l(lock_mgr_, key);
int32_t version = 0;
std::string meta_value;
BaseMetaKey base_meta_key(key);
Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
char meta_value_buf[4] = {0};
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
if (s.ok()) {
ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
version = parsed_hashes_meta_value.InitialMetaValue();
if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
// parsed_hashes_meta_value.set_timestamp(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
}
} else {
int32_t count = 0;
std::string data_value;
version = parsed_hashes_meta_value.Version();
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
statistic++;
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
} else if (s.IsNotFound()) {
count++;
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
} else {
return s;
}
}
if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
return Status::InvalidArgument("hash size overflow");
}
parsed_hashes_meta_value.ModifyCount(count);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
}
} else if (s.IsNotFound()) {
// char str[4];
EncodeFixed32(meta_value_buf, filtered_fvs.size());
HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
version = hashes_meta_value.UpdateVersion();
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
for (const auto& fv : filtered_fvs) {
HashesDataKey hashes_data_key(key, version, fv.field);
PKHashDataValue ehashes_value(fv.value);
if (fv.ttl > 0) {
ehashes_value.SetRelativeTimestamp(fv.ttl);
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
}
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
return s;
}
🧰 Tools
🪛 cppcheck

[performance] 847-847: Searching before insertion is not necessary.

(stlFindInsert)

Comment on lines +455 to +454
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() + ", expect type: " +
DataTypeStrings[static_cast<int>(DataType::kStrings)] + ", get type: " +
DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);

🛠️ Refactor suggestion

Consider using a formatting library for error messages.

As suggested earlier, using a formatting library like fmt or std::format (C++20) would improve performance and maintainability of error messages throughout the codebase.

Example using fmt:

return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
                               key.ToString(),
                               DataTypeStrings[static_cast<int>(DataType::kStrings)],
                               DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]));
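A self-contained sketch of building the same message with fmt (only the formatting part; Status, DataTypeStrings, and GetMetaValueType from the snippet above are assumed to exist in the real code):

#include <fmt/format.h>
#include <string>

// Builds the WRONGTYPE message; callers would wrap it in Status::InvalidArgument.
std::string WrongTypeMessage(const std::string& key, const std::string& expect_type, const std::string& got_type) {
  return fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}", key, expect_type, got_type);
}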

Comment on lines +492 to +491
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kStrings)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(value))]);

🛠️ Refactor suggestion

Consider using a formatting library for error messages.

As suggested earlier, using a formatting library like fmt or std::format (C++20) would improve performance and maintainability of error messages throughout the codebase.

Example using fmt:

return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
                               key.ToString(),
                               DataTypeStrings[static_cast<int>(DataType::kStrings)],
                               DataTypeStrings[static_cast<int>(GetMetaValueType(value))]));

Comment on lines +536 to +535
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kStrings)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);

🛠️ Refactor suggestion

Consider using a formatting library for error messages.

As suggested earlier, using a formatting library like fmt or std::format (C++20) would improve performance and maintainability of error messages throughout the codebase.

Example using fmt:

return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
                               key.ToString(),
                               DataTypeStrings[static_cast<int>(DataType::kStrings)],
                               DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]));

};

} // namespace storage
#endif // SRC_BASE_VALUE_FORMAT_H_

⚠️ Potential issue

Inconsistent header guard ending comment.

The #endif at the end of the file does not match the header guard defined at the beginning. The mismatch can lead to confusion and potential issues with conditional compilation.

  • Header guard starts with:

    #ifndef SRC_PKHASH_DATA_VALUE_FORMAT_H_
    #define SRC_PKHASH_DATA_VALUE_FORMAT_H_
  • Header guard ends with:

    #endif  // SRC_BASE_VALUE_FORMAT_H_

Apply this diff to correct the header guard ending comment:

-#endif  // SRC_BASE_VALUE_FORMAT_H_
+#endif  // SRC_PKHASH_DATA_VALUE_FORMAT_H_

Comment on lines 109 to 1084
// fields.clear();
// std::map<storage::DataType, rocksdb::Status> type_status;
// db.Expire("B_HKEYS_KEY", 1);
// ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
// std::this_thread::sleep_for(std::chrono::milliseconds(2000));
// s = db.PKHKeys("B_HKEYS_KEY", &fields);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(fields.size(), 0);

// // PKHKeys not exist hash table
// fields.clear();
// s = db.PKHKeys("HKEYS_NOT_EXIST_KEY", &fields);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(fields.size(), 0);
// }

// // PKHVals
// TEST_F(PKHashesTest, HVals) {
// int32_t ret = 0;
// std::vector<storage::FieldValue> mid_fvs_in;
// mid_fvs_in.push_back({"MID_TEST_FIELD1", "MID_TEST_VALUE1"});
// mid_fvs_in.push_back({"MID_TEST_FIELD2", "MID_TEST_VALUE2"});
// mid_fvs_in.push_back({"MID_TEST_FIELD3", "MID_TEST_VALUE3"});
// s = db.PKHMSet("B_HVALS_KEY", mid_fvs_in);
// ASSERT_TRUE(s.ok());

// std::vector<std::string> values;
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(values.size(), 3);
// ASSERT_EQ(values[0], "MID_TEST_VALUE1");
// ASSERT_EQ(values[1], "MID_TEST_VALUE2");
// ASSERT_EQ(values[2], "MID_TEST_VALUE3");

// // Insert some kv who's position above "mid kv"
// std::vector<storage::FieldValue> pre_fvs_in;
// pre_fvs_in.push_back({"PRE_TEST_FIELD1", "PRE_TEST_VALUE1"});
// pre_fvs_in.push_back({"PRE_TEST_FIELD2", "PRE_TEST_VALUE2"});
// pre_fvs_in.push_back({"PRE_TEST_FIELD3", "PRE_TEST_VALUE3"});
// s = db.PKHMSet("A_HVALS_KEY", pre_fvs_in);
// ASSERT_TRUE(s.ok());
// values.clear();
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(values.size(), 3);
// ASSERT_EQ(values[0], "MID_TEST_VALUE1");
// ASSERT_EQ(values[1], "MID_TEST_VALUE2");
// ASSERT_EQ(values[2], "MID_TEST_VALUE3");

// // Insert some kv who's position below "mid kv"
// std::vector<storage::FieldValue> suf_fvs_in;
// suf_fvs_in.push_back({"SUF_TEST_FIELD1", "SUF_TEST_VALUE1"});
// suf_fvs_in.push_back({"SUF_TEST_FIELD2", "SUF_TEST_VALUE2"});
// suf_fvs_in.push_back({"SUF_TEST_FIELD3", "SUF_TEST_VALUE3"});
// s = db.PKHMSet("C_HVALS_KEY", suf_fvs_in);
// ASSERT_TRUE(s.ok());
// values.clear();
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(values.size(), 3);
// ASSERT_EQ(values[0], "MID_TEST_VALUE1");
// ASSERT_EQ(values[1], "MID_TEST_VALUE2");
// ASSERT_EQ(values[2], "MID_TEST_VALUE3");

// // PKHVals timeout hash table
// values.clear();
// std::map<storage::DataType, rocksdb::Status> type_status;
// db.Expire("B_HVALS_KEY", 1);
// ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
// std::this_thread::sleep_for(std::chrono::milliseconds(2000));
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(values.size(), 0);

// // PKHVals not exist hash table
// values.clear();
// s = db.PKHVals("HVALS_NOT_EXIST_KEY", &values);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(values.size(), 0);
// }

// // PKHStrlen
// TEST_F(PKHashesTest, HStrlenTest) {
// int32_t ret = 0;
// int32_t len = 0;
// s = db.PKHSet("HSTRLEN_KEY", "HSTRLEN_TEST_FIELD", "HSTRLEN_TEST_VALUE", &ret);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(ret, 1);

// s = db.PKHStrlen("HSTRLEN_KEY", "HSTRLEN_TEST_FIELD", &len);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(len, 18);

// // If the key or the field do not exist, 0 is returned
// s = db.PKHStrlen("HSTRLEN_KEY", "HSTRLEN_NOT_EXIST_FIELD", &len);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(len, 0);
// }

// PKHScan

🛠️ Refactor suggestion

Uncomment or remove inactive test cases

A large block of test cases from lines 109 to 836 is commented out. If these tests are intended to be part of the test suite, consider uncommenting them. If they are obsolete or not ready for inclusion, consider removing them or adding explanations for future reference.

@bigdaronlee163 bigdaronlee163 force-pushed the ldx-ehash-code-4 branch 3 times, most recently from 87eee4b to 378f5a3 on September 28, 2024 06:36
@coderabbitai coderabbitai bot left a comment

Caution

Inline review comments failed to post

Actionable comments posted: 30

🧹 Outside diff range and nitpick comments (20)
src/pika_pkhash.cc (1)

182-184: Address TODO comment and adjust fields assignment

There's a TODO comment that needs to be addressed. Additionally, the fields_ assignment needs to be adjusted due to the removal of unnecessary iterator increments.

Consider implementing the TODO and modifying the assignment:

-  //  TODO(DDD) fields_.assign(argv_.begin() + 4, argv_.end());
-  fields_.assign(iter, argv_.end());
+  fields_.assign(argv_.begin() + 4, argv_.end());

Would you like me to create a GitHub issue to track this TODO?

src/pika_command.cc (2)

Line range hint 1110-1137: Suggest performance testing and documentation for cache control changes

The IsNeedCacheDo method has been updated with more granular control over caching for different data types. While this is likely an improvement in cache management, please ensure the following:

  1. Perform performance testing to verify that these additional checks don't introduce significant overhead, especially in scenarios with mixed data types.
  2. Update the documentation to reflect these new caching controls, including how to configure caching for each data type and the impact of the IsCacheDisabledTemporarily flag.
  3. Consider adding a comment in the code explaining the overall caching strategy and how these checks fit into it.

Consider refactoring this method to use a lookup table or switch statement if the number of data types continues to grow, to improve readability and maintainability.
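A minimal, self-contained sketch of the table-driven idea (the enum and config struct below are placeholders, not the actual pika types or configuration API):

#include <array>
#include <cstddef>

enum class CacheKind { kString, kHash, kList, kSet, kZset, kBit, kCount };

struct CacheConfig {
  // Placeholder per-type switches; in pika these would come from the runtime configuration.
  std::array<bool, static_cast<std::size_t>(CacheKind::kCount)> enabled{};
  bool temporarily_disabled = false;
};

// One lookup replaces a chain of per-type if statements.
bool IsCacheEnabled(const CacheConfig& cfg, CacheKind kind) {
  if (cfg.temporarily_disabled) {
    return false;
  }
  return cfg.enabled[static_cast<std::size_t>(kind)];
}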


Line range hint 1-1210: Overall review summary and suggestions

The changes to src/pika_command.cc primarily focus on adding PKHash commands and enhancing caching functionality. While the implementation generally looks good, here are some final suggestions:

  1. Ensure consistency in command flags across all new and modified commands.
  2. Resolve all TODO comments and translate any remaining non-English comments to English.
  3. Add or update documentation for the new PKHash commands and caching behavior.
  4. Consider adding more inline comments to explain complex logic, especially in caching-related methods.
  5. Perform thorough testing of the new PKHash commands and caching functionality, including edge cases and performance testing.
  6. Review error handling and edge cases for the new functionality to ensure robustness.
  7. Update any relevant external documentation or README files to reflect these new features and changes.

As the command table continues to grow, consider if there's an opportunity to refactor the InitCmdTable function to make it more maintainable, possibly by grouping commands by type or using a data-driven approach to initialize commands.
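One possible shape for a data-driven registration, sketched with placeholder types rather than the real Cmd/CmdTable classes (names and fields here are illustrative only):

#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Placeholder description of one command; a real table would also carry a factory for the Cmd object.
struct CommandSpec {
  std::string name;
  int arity;
  uint32_t flags;
};

using SpecTable = std::map<std::string, CommandSpec>;

// Registering from a data table keeps the init function short as new command groups are added.
void RegisterCommands(SpecTable* table, const std::vector<CommandSpec>& specs) {
  for (const auto& spec : specs) {
    table->emplace(spec.name, spec);
  }
}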

src/storage/src/redis_pkhashes.cc (1)

853-853: Remove outdated TODO comment in PKHMSetex.

The TODO comment indicates adding expiration times to fields, but the PKHMSetex function already implements this functionality. Removing the comment will keep the code clean and up-to-date.

Apply this diff to remove the unnecessary comment:

-// TODO(DDD) field has an expiration time.
src/storage/tests/pkhashes_test.cc (1)

575-575: Translate comments to English for consistency

The comment // 从这里开始有问题。 (roughly, "the problem starts from here") is in Chinese. To maintain code readability and inclusivity for all team members, please translate comments to English.

src/storage/src/storage.cc (15)

Line range hint 191-203: Refactor the loop to improve readability and maintainability.

The loop body contains repetitive code for handling different cases based on the status. Consider extracting the common logic into a separate function or using a more expressive construct like a switch statement.

Here's a suggested refactoring:

for (const auto& key : keys) {
  auto& inst = GetDBInstance(key);
  std::string value;
  Status s = inst->MGet(key, &value);
  if (s.ok()) {
    vss->push_back({value, Status::OK()});
  } else {
    vss->push_back({std::string(), s});
    if (!s.IsNotFound()) {
      vss->clear();
      return s;
    }
  }
}

Line range hint 210-225: Refactor the loop to improve readability and maintainability.

The loop body contains repetitive code for handling different cases based on the status. Consider extracting the common logic into a separate function or using a more expressive construct like a switch statement.

Here's a suggested refactoring:

for (const auto& key : keys) {
  auto& inst = GetDBInstance(key);
  std::string value;
  int64_t ttl;
  Status s = inst->MGetWithTTL(key, &value, &ttl);
  if (s.ok()) {
    vss->push_back({value, Status::OK(), ttl});
  } else {
    vss->push_back({std::string(), s, ttl});
    if (!s.IsNotFound()) {
      vss->clear();
      return s;
    }
  }
}

278-282: Ensure consistent parameter passing style for output parameters.

The GetrangeWithValue method uses a mix of pointers and references for output parameters. For consistency with other methods, consider using pointers for all output parameters.

Apply this diff to update the parameter types:

-Status Storage::GetrangeWithValue(const Slice& key, int64_t start_offset, int64_t end_offset, std::string* ret,
-                                  std::string& out_new_value, int64_t* ttl) {
+Status Storage::GetrangeWithValue(const Slice& key, int64_t start_offset, int64_t end_offset, std::string* ret,
+                                  std::string* out_new_value, int64_t* ttl) {

492-496: Return std::vector<int64_t> instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the timestamps vector directly instead of using an output parameter.

Apply this diff:

-Status Storage::PKHExpiretime(const Slice& key, int32_t numfields, const std::vector<std::string>& fields,
-                              std::vector<int64_t>* timestamps) {
+std::vector<int64_t> Storage::PKHExpiretime(const Slice& key, int32_t numfields, const std::vector<std::string>& fields) {
+  std::vector<int64_t> timestamps;
   auto& inst = GetDBInstance(key);
-  return inst->PKHExpiretime(key, numfields, fields, timestamps);
+  inst->PKHExpiretime(key, numfields, fields, &timestamps);
+  return timestamps;

498-502: Return std::vector<int32_t> instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the rets vector directly instead of using an output parameter.

Apply this diff:

-Status Storage::PKHPersist(const Slice& key, int32_t numfields, const std::vector<std::string>& fields,
-                           std::vector<int32_t>* rets) {
+std::vector<int32_t> Storage::PKHPersist(const Slice& key, int32_t numfields, const std::vector<std::string>& fields) {
+  std::vector<int32_t> rets;
   auto& inst = GetDBInstance(key);
-  return inst->PKHPersist(key, numfields, fields, rets);
+  inst->PKHPersist(key, numfields, fields, &rets);
+  return rets;

504-508: Return std::vector<int64_t> instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the ttls vector directly instead of using an output parameter.

Apply this diff:

-Status Storage::PKHTTL(const Slice& key, int32_t numfields, const std::vector<std::string>& fields,
-                       std::vector<int64_t>* ttls) {
+std::vector<int64_t> Storage::PKHTTL(const Slice& key, int32_t numfields, const std::vector<std::string>& fields) {
+  std::vector<int64_t> ttls;
   auto& inst = GetDBInstance(key);
-  return inst->PKHTTL(key, numfields, fields, ttls);
+  inst->PKHTTL(key, numfields, fields, &ttls);
+  return ttls;

510-513: Return std::string instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the value string directly instead of using an output parameter.

Apply this diff:

-Status Storage::PKHGet(const Slice& key, const Slice& field, std::string* value) {
+std::string Storage::PKHGet(const Slice& key, const Slice& field) {
+  std::string value;
   auto& inst = GetDBInstance(key);
-  return inst->PKHGet(key, field, value);
+  inst->PKHGet(key, field, &value);
+  return value;

515-518: Consider returning the res value directly.

Since the res parameter is an output parameter, consider returning its value directly instead of returning a Status object. This would align with the typical convention of returning the result value for methods that perform a simple operation.

Apply this diff:

-Status Storage::PKHSet(const Slice& key, const Slice& field, const Slice& value, int32_t* res) {
+int32_t Storage::PKHSet(const Slice& key, const Slice& field, const Slice& value) {
+  int32_t res;
   auto& inst = GetDBInstance(key);
-  return inst->PKHSet(key, field, value, res);
+  inst->PKHSet(key, field, value, &res);
+  return res;

525-528: Consider returning a bool value instead of a Status.

Since the PKHExists method checks for the existence of a field in a hash key, it would be more intuitive to return a boolean value indicating the existence rather than a Status object. This would align with the typical convention of returning a boolean for existence checks.

Apply this diff:

-Status Storage::PKHExists(const Slice& key, const Slice& field) {
+bool Storage::PKHExists(const Slice& key, const Slice& field) {
   auto& inst = GetDBInstance(key);
-  return inst->PKHExists(key, field);
+  return inst->PKHExists(key, field).ok();

530-533: Consider returning the ret value directly.

Since the ret parameter is an output parameter, consider returning its value directly instead of returning a Status object. This would align with the typical convention of returning the result value for methods that perform a simple operation.

Apply this diff:

-Status Storage::PKHDel(const Slice& key, const std::vector<std::string>& fields, int32_t* ret) {
+int32_t Storage::PKHDel(const Slice& key, const std::vector<std::string>& fields) {
+  int32_t ret;
   auto& inst = GetDBInstance(key);
-  return inst->PKHDel(key, fields, ret);
+  inst->PKHDel(key, fields, &ret);
+  return ret;

535-538: Consider returning the ret value directly.

Since the ret parameter is an output parameter, consider returning its value directly instead of returning a Status object. This would align with the typical convention of returning the result value for methods that perform a simple operation.

Apply this diff:

-Status Storage::PKHLen(const Slice& key, int32_t* ret) {
+int32_t Storage::PKHLen(const Slice& key) {
+  int32_t ret;
   auto& inst = GetDBInstance(key);
-  return inst->PKHLen(key, ret);
+  inst->PKHLen(key, &ret);
+  return ret;

545-548: Consider returning the len value directly.

Since the len parameter is an output parameter, consider returning its value directly instead of returning a Status object. This would align with the typical convention of returning the result value for methods that perform a simple operation.

Apply this diff:

-Status Storage::PKHStrlen(const Slice& key, const Slice& field, int32_t* len) {
+int32_t Storage::PKHStrlen(const Slice& key, const Slice& field) {
+  int32_t len;
   auto& inst = GetDBInstance(key);
-  return inst->PKHStrlen(key, field, len);
+  inst->PKHStrlen(key, field, &len);
+  return len;

570-573: Return std::vector<std::string> instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the fields vector directly instead of using an output parameter.

Apply this diff:

-Status Storage::PKHKeys(const Slice& key, std::vector<std::string>* fields) {
+std::vector<std::string> Storage::PKHKeys(const Slice& key) {
+  std::vector<std::string> fields;
   auto& inst = GetDBInstance(key);
-  return inst->PKHKeys(key, fields);
+  inst->PKHKeys(key, &fields);
+  return fields;

575-578: Return std::vector<std::string> instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the values vector directly instead of using an output parameter.

Apply this diff:

-Status Storage::PKHVals(const Slice& key, std::vector<std::string>* values) {
+std::vector<std::string> Storage::PKHVals(const Slice& key) {
+  std::vector<std::string> values;
   auto& inst = GetDBInstance(key);
-  return inst->PKHVals(key, values);
+  inst->PKHVals(key, &values);
+  return values;

580-583: Return std::vector<FieldValueTTL> instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the fvts vector directly instead of using an output parameter.

Apply this diff:

-Status Storage::PKHGetall(const Slice& key, std::vector<FieldValueTTL>* fvts) {
+std::vector<FieldValueTTL> Storage::PKHGetall(const Slice& key) {
+  std::vector<FieldValueTTL> fvts;
   auto& inst = GetDBInstance(key);
-  return inst->PKHGetall(key, fvts);
+  inst->PKHGetall(key, &fvts);
+  return fvts;

🛑 Comments failed to post (30)

src/pika_pkhash.cc (15)

174-178: 🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator iter is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity.

Consider removing these unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {

140-144: 🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator iter is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity.

Consider removing these unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {

105-109: 🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator iter is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity.

Consider removing these unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.



68-73: 🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator iter is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity.

Consider removing these unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.



28-32: 🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator iter is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity.

Consider removing these unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.



79-80: ⚠️ Potential issue

Adjust fields assignment

Due to the removal of unnecessary iterator increments, the fields_ assignment needs to be adjusted to ensure correct parsing.

Consider modifying the assignment to:

-fields_.assign(iter, argv_.end());
+fields_.assign(argv_.begin() + 5, argv_.end());
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  fields_.assign(argv_.begin() + 5, argv_.end());
}

115-116: ⚠️ Potential issue

Adjust fields assignment

Due to the removal of unnecessary iterator increments, the fields_ assignment needs to be adjusted to ensure correct parsing.

Consider modifying the assignment to:

-fields_.assign(iter, argv_.end());
+fields_.assign(argv_.begin() + 4, argv_.end());
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  fields_.assign(argv_.begin() + 4, argv_.end());
}

149-150: ⚠️ Potential issue

Adjust fields assignment

Due to the removal of unnecessary iterator increments, the fields_ assignment needs to be adjusted to ensure correct parsing.

Consider modifying the assignment to:

-fields_.assign(iter, argv_.end());
+fields_.assign(argv_.begin() + 4, argv_.end());
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  fields_.assign(argv_.begin() + 4, argv_.end());
}

63-66: ⚠️ Potential issue

Improve error handling in timestamp parsing

The condition for parsing the timestamp should be adjusted to correctly detect parsing errors.

Consider modifying the condition to:

-if (pstd::string2int(argv_[2].data(), argv_[2].size(), &timestamp_) == 0) {
+if (pstd::string2int(argv_[2].data(), argv_[2].size(), &timestamp_) != 1) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (pstd::string2int(argv_[2].data(), argv_[2].size(), &timestamp_) != 1) {
    res_.SetRes(CmdRes::kInvalidInt);
    return;
  }

74-77: ⚠️ Potential issue

Improve error handling in numfields parsing

Similar to the timestamp parsing, the condition for parsing numfields_ should be adjusted to correctly detect parsing errors.

Consider modifying the condition to:

-if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) != 1) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) != 1) {
    res_.SetRes(CmdRes::kInvalidInt);
    return;
  }

144-147: ⚠️ Potential issue

Improve error handling in numfields parsing

The condition for parsing numfields_ should be adjusted to correctly detect parsing errors.

Consider modifying the condition to:

-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
    res_.SetRes(CmdRes::kInvalidInt);
    return;
  }

178-181: ⚠️ Potential issue

Improve error handling in numfields parsing

The condition for parsing numfields_ should be adjusted to correctly detect parsing errors.

Consider modifying the condition to:

-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
    res_.SetRes(CmdRes::kInvalidInt);
    return;
  }

110-113: ⚠️ Potential issue

Improve error handling in numfields parsing

The condition for parsing numfields_ should be adjusted to correctly detect parsing errors.

Consider modifying the condition to:

-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
    res_.SetRes(CmdRes::kInvalidInt);
    return;
  }

271-279: ⚠️ Potential issue

Add expiration time parsing to PKHSetexCmd::DoInitial

The PKHSetexCmd::DoInitial function is missing the parsing of the expiration time, which is a crucial part of the Setex command.

Consider adding the expiration time parsing:

 void PKHSetexCmd::DoInitial() {
   if (!CheckArg(argv_.size())) {
     res_.SetRes(CmdRes::kWrongNum, kCmdNamePKHSetex);
     return;
   }
   key_ = argv_[1];
-  field_ = argv_[2];
-  value_ = argv_[3];
+  if (pstd::string2int(argv_[2].data(), argv_[2].size(), &ttl_) != 1) {
+    res_.SetRes(CmdRes::kInvalidInt);
+    return;
+  }
+  field_ = argv_[3];
+  value_ = argv_[4];
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

void PKHSetexCmd::DoInitial() {
  if (!CheckArg(argv_.size())) {
    res_.SetRes(CmdRes::kWrongNum, kCmdNamePKHSetex);
    return;
  }
  key_ = argv_[1];
  if (pstd::string2int(argv_[2].data(), argv_[2].size(), &ttl_) != 1) {
    res_.SetRes(CmdRes::kInvalidInt);
    return;
  }
  field_ = argv_[3];
  value_ = argv_[4];
}

281-292: ⚠️ Potential issue

Implement proper PKHSetex functionality

The PKHSetexCmd::Do function is not implementing the Setex functionality correctly. It's using PKHSet instead of a PKHSetex method, and it's not using the expiration time.

Consider implementing the correct Setex functionality:

  1. Implement a PKHSetex method in the storage engine if it doesn't exist.
  2. Use the PKHSetex method instead of PKHSet.
  3. Pass the expiration time to the method.

Example implementation:

 void PKHSetexCmd::Do() {
   int32_t ret = 0;
-  s_ = db_->storage()->PKHSet(key_, field_, value_, &ret);
+  s_ = db_->storage()->PKHSetex(key_, ttl_, field_, value_, &ret);
   if (s_.ok()) {
     res_.AppendContent(":" + std::to_string(ret));
     AddSlotKey("h", key_, db_);
   } else if (s_.IsInvalidArgument()) {
     res_.SetRes(CmdRes::kMultiKey);
+  } else if (s_.IsNotFound()) {
+    res_.SetRes(CmdRes::kNotFound);
+  } else if (s_.IsCorruption()) {
+    res_.SetRes(CmdRes::kErrOther, "Corruption error: " + s_.ToString());
   } else {
     res_.SetRes(CmdRes::kErrOther, s_.ToString());
   }
 }

Also, ensure that the ttl_ member is added to the PKHSetexCmd class and properly initialized in the DoInitial method.

Committable suggestion was skipped due to low confidence.

src/storage/include/storage/storage.h (3)

415-463: 🛠️ Refactor suggestion

New PKH methods look good, but consider removing redundant parameter

The new Pika Hash (PKH) methods are a comprehensive addition to the storage system, providing a wide range of operations for hash data structures. The naming convention is consistent, and the functionality appears to cover all necessary operations.

However, there's a potential optimization:

Consider removing the numfields parameter from methods like PKHExpire, PKHExpireat, etc. This parameter is redundant as the number of fields can be obtained from fields.size(). This change would simplify the method signatures and prevent potential inconsistencies.

For example, change:

Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields, std::vector<int32_t>* rets);

to:

Status PKHExpire(const Slice& key, int32_t ttl, const std::vector<std::string>& fields, std::vector<int32_t>* rets);

Apply this change to all similar method signatures.


269-270: 🛠️ Refactor suggestion

Consider using pointers for output parameters consistently

The Append method has been modified to use a reference for the out_new_value parameter. For consistency across the codebase, consider using pointers for output parameters instead of references.

Change the signature of Append to use a pointer:

Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
              std::string* out_new_value);

This change would make it consistent with other methods in the class that use pointers for output parameters.


279-280: 🛠️ Refactor suggestion

Consider using pointers for output parameters consistently

Similar to the Append method, the BitOp method uses a reference for the value_to_dest parameter. For consistency, consider changing this to a pointer.

Modify the BitOp method signature:

Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys,
             std::string* value_to_dest, int64_t* ret);

This change would improve consistency with other methods in the class.

src/pika_command.cc (4)

515-515: ⚠️ Potential issue

Translate comment to English and address the TODO

Please translate the comment to English and resolve the TODO by confirming the correct arity for the PKHSetexCmd.

Replace the current comment with:

// TODO: Verify if the arity should be -6

After verifying the correct arity, update the command initialization accordingly and remove the TODO comment.


543-543: ⚠️ Potential issue

Translate comment to English and clarify command parameters

Please translate the comment to English and verify the correct parameters for the PKHMSetexCmd.

Replace the current comment with:

// TODO: Verify if the arity should be -6 and if TTL is required

After verifying the correct arity and parameters, update the command initialization accordingly and remove the TODO comment.


555-559: ⚠️ Potential issue

Address TODOs and clarify command performance for PKHValsCmd

Please address the TODO comments and clarify why the PKHVals command is marked as slow.

  1. Translate and address the first TODO:
    Replace // TODO(DDD) 为啥vals是慢的命令。 ("why is vals a slow command?") with an explanation in English about why this command is marked as slow.

  2. Translate and address the second TODO:
    Replace // TODO(DDD) 这些标志位都是啥意思。 ("what do these flag bits mean?") with a comment explaining the meaning of the flags used.

After addressing these TODOs, remove the comments if they are no longer necessary.


499-501: ⚠️ Potential issue

Correct command flags for PKHExpiretimeCmd

The PKHExpiretime command is currently flagged with kCmdFlagsWrite, but it should be a read operation. Please update the flags to use kCmdFlagsRead instead.

Apply this diff to correct the flags:

std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
-     kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+     kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
      kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
  cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpiretime, std::move(ehexpiretimeptr)));
src/storage/src/redis_pkhashes.cc (4)

220-223: ⚠️ Potential issue

Clarify the error message in PKHExpireat for invalid timestamps.

The condition if (timestamp <= 0) checks for timestamps less than or equal to zero, but the error message states "must be >= 0", which is inconsistent. It should state "must be > 0" to match the condition.

Apply this diff to correct the error message:

-  if (timestamp <= 0) {
-    rets->assign(numfields, 2);
-    return Status::InvalidArgument("invalid expire time, must be >= 0");
-  }
+  if (timestamp <= 0) {
+    rets->assign(numfields, 2);
+    return Status::InvalidArgument("invalid expire time, must be > 0");
+  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (timestamp <= 0) {
    rets->assign(numfields, 2);
    return Status::InvalidArgument("invalid expire time, must be > 0");
  }

158-160: ⚠️ Potential issue

Correct the TTL validation error message in PKHExpire.

The condition if (ttl <= 0) invalidates ttl values less than or equal to zero, but the error message says "must be >= 0", which is misleading. Since ttl = 0 is considered invalid, the error message should state "must be > 0" to accurately reflect the condition.

Apply this diff to fix the error message:

-  if (ttl <= 0) {
-    return Status::InvalidArgument("invalid expire time, must be >= 0");
-  }
+  if (ttl <= 0) {
+    return Status::InvalidArgument("invalid expire time, must be > 0");
+  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (ttl <= 0) {
    return Status::InvalidArgument("invalid expire time, must be > 0");
  }

466-468: ⚠️ Potential issue

Enhance the error message in PKHSetex for TTL validation.

When ttl is less than or equal to zero, the function returns an InvalidArgument status with a generic message "invalid expire time". To aid users, the error message should specify the valid range for ttl.

Apply this diff to improve the error message:

-  if (ttl <= 0) {
-    return Status::InvalidArgument("invalid expire time");
-  }
+  if (ttl <= 0) {
+    return Status::InvalidArgument("invalid expire time, ttl must be > 0");
+  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

  if (ttl <= 0) {
    return Status::InvalidArgument("invalid expire time, ttl must be > 0");
  }

672-681: 🛠️ Refactor suggestion

Handle non-existent fields consistently in PKHStrlen.

When the field does not exist, PKHStrlen sets *len to 0 but returns the NotFound status from PKHGet. Consider returning Status::OK() in that case so callers can treat a missing field as length 0, while still propagating other errors (such as corruption) unchanged.

Apply this diff to adjust the return status:

 Status Redis::PKHStrlen(const Slice& key, const Slice& field, int32_t* len) {
   std::string value;
   Status s = PKHGet(key, field, &value);
   if (s.ok()) {
     *len = static_cast<int32_t>(value.size());
+    return Status::OK();
+  } else if (s.IsNotFound()) {
+    *len = 0;
+    return Status::OK();
   } else {
     *len = 0;
     return s;
   }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Status Redis::PKHStrlen(const Slice& key, const Slice& field, int32_t* len) {
  std::string value;
  Status s = PKHGet(key, field, &value);
  if (s.ok()) {
    *len = static_cast<int32_t>(value.size());
    return Status::OK();
  } else if (s.IsNotFound()) {
    *len = 0;
    return Status::OK();
  } else {
    *len = 0;
    return s;
  }
}
src/storage/tests/pkhashes_test.cc (3)

95-103: ⚠️ Potential issue

Avoid using hardcoded sleep durations in tests

Using std::this_thread::sleep_for with fixed durations can lead to flaky tests on systems with varying performance characteristics. Tests may intermittently fail if the sleep duration is insufficient due to system load.

Consider using time mocking techniques or condition variables to simulate the passage of time more reliably. This will enhance test robustness and reduce false negatives.
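
One common approach, sketched below, is to inject a small clock interface that the expiration checks read instead of the wall clock; the Clock/FakeClock names here are hypothetical and not part of this PR.

#include <cstdint>

// Minimal fake-clock sketch: tests advance time explicitly instead of sleeping.
class Clock {
 public:
  virtual ~Clock() = default;
  virtual int64_t NowSeconds() const = 0;
};

class FakeClock : public Clock {
 public:
  int64_t NowSeconds() const override { return now_; }
  void Advance(int64_t seconds) { now_ += seconds; }

 private:
  int64_t now_ = 0;
};

// In a test, code that calls Clock::NowSeconds() for TTL checks can be driven
// deterministically:
//   FakeClock clock;
//   clock.Advance(101);  // replaces std::this_thread::sleep_for(100s)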


115-126: ⚠️ Potential issue

Ensure return statuses are checked after function calls

In multiple test cases, the return status s from function calls such as db.PKHExpire, db.PKHExpireat, db.PKHPersist, db.PKHTTL, and others is not consistently checked before proceeding. Ignoring the return status may lead to undetected errors and misleading test results.

Please ensure that you verify s.ok() after each function call to handle potential errors appropriately.

Also applies to: 152-176, 190-231, 242-272, 285-314, 339-383, 386-403, 406-469, 473-487, 491-526, 530-627, 630-721, 725-762, 765-836, 839-926, 930-991, 994-1056, 1059-1074, 1077-1478


877-878: 🛠️ Refactor suggestion

Use safer string-to-integer conversion methods

The use of atoll(str_value.data()) for string-to-integer conversion is considered unsafe and less idiomatic in modern C++. It does not provide proper error handling and may lead to undefined behavior if the string is not a valid representation of an integer.

Consider using std::stoll(str_value) instead, which throws an exception on invalid input and is more in line with C++ best practices.

Apply this diff to update the conversions:

- ASSERT_EQ(atoll(str_value.data()), 1000);
+ ASSERT_EQ(std::stoll(str_value), 1000);

- ASSERT_EQ(atoll(str_value.data()), 100);
+ ASSERT_EQ(std::stoll(str_value), 100);

- ASSERT_EQ(atoll(str_value.data()), 200);
+ ASSERT_EQ(std::stoll(str_value), 200);

Also applies to: 892-893, 904-905

src/storage/src/storage.cc (1)

128-128: ⚠️ Potential issue

Ensure the method is thread-safe.

Since this method returns a reference to a shared resource (insts_), it should be made thread-safe to avoid potential data races or undefined behavior in concurrent scenarios.

Consider adding proper synchronization mechanisms, such as locks or atomic operations, to ensure thread safety when accessing and modifying the insts_ vector.
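
A hedged sketch of one such approach, returning a snapshot copy guarded by a shared mutex instead of a bare reference (the Instance type and member names are placeholders, not the PR's actual declarations):

#include <memory>
#include <shared_mutex>
#include <vector>

class Instance;  // placeholder for the per-shard DB instance type

class StorageSketch {
 public:
  // Readers copy the shared_ptrs under a shared lock instead of receiving a
  // reference into shared state.
  std::vector<std::shared_ptr<Instance>> SnapshotInsts() const {
    std::shared_lock<std::shared_mutex> lock(insts_mutex_);
    return insts_;
  }

  // Writers take an exclusive lock.
  void AddInst(std::shared_ptr<Instance> inst) {
    std::unique_lock<std::shared_mutex> lock(insts_mutex_);
    insts_.push_back(std::move(inst));
  }

 private:
  mutable std::shared_mutex insts_mutex_;
  std::vector<std::shared_ptr<Instance>> insts_;
};

Whether this is needed in practice depends on whether insts_ is ever modified after startup; if it is effectively immutable once initialized, documenting that invariant may be enough.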

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

🧹 Outside diff range and nitpick comments (16)
src/storage/include/storage/storage_define.h (1)

124-132: Improved formatting and retained TODO comment.

The reformatting of the SeekUserkeyDelim function enhances readability without altering its functionality. This is a positive change.

However, the retained TODO comment suggests an area for potential improvement:

// TODO: handle invalid format

Would you like assistance in addressing this TODO comment? I can help draft a solution for handling invalid formats or create a GitHub issue to track this task.

src/storage/src/base_value_format.h (2)

21-31: LGTM! Consider adding a comment for kPKHashes.

The changes to the DataType enum are consistent with the PR objectives. The addition of kPKHashes and the updates to kNones and kAll maintain the enum's integrity.

Consider adding a brief comment explaining the purpose of kPKHashes for better code documentation:

 enum class DataType : uint8_t {
   kStrings = 0,
   kHashes = 1,
   kSets = 2,
   kLists = 3,
   kZSets = 4,
   kStreams = 5,
+  // Persistent Key-Hash type for new hash commands
   kPKHashes = 6,
   kNones = 7,
   kAll = 8,
 };

Line range hint 100-167: LGTM! Single-line method formatting is consistent.

The formatting changes improve code consistency and readability. The condensed IsValid method is more concise while maintaining the same functionality, and its single-line brace style already matches the other one-line methods in this class, so no further change is needed.
src/storage/src/redis.h (2)

254-294: LGTM: New PK Hash Commands added.

The new PK Hash Commands are well-structured and consistent with the existing codebase. They provide a comprehensive set of operations for PK Hash data structures.

Consider adding inline documentation for the new parameters in these methods, especially for numfields and fields in methods like PKHExpire, to clarify their purpose and usage.
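
As an illustration, a hedged sketch of what such inline documentation could look like in redis.h (the signature is the one quoted elsewhere in this review; the wording is only a suggestion):

// Sets a relative TTL, in seconds, on individual fields of the hash at `key`.
//   ttl:       seconds until the fields expire; must be > 0
//   numfields: number of entries in `fields` (currently redundant with fields.size())
//   fields:    the hash fields to expire
//   rets:      per-field result codes, written on return
Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields,
                 const std::vector<std::string>& fields, std::vector<int32_t>* rets);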


275-275: LGTM: Consistent addition of prefetch_meta parameter.

The addition of the prefetch_meta parameter to multiple methods is consistent and maintains backward compatibility with its default value.

Consider adding documentation to explain the purpose and usage of the prefetch_meta parameter. This will help developers understand when and how to use this new feature.

Also applies to: 380-380, 391-391

src/storage/include/storage/storage.h (2)

415-463: New Pika Hash (PKH) methods added

Several new methods for Pika Hash operations have been added, including:

  • PKHExpire, PKHExpireat, PKHExpiretime, PKHPersist, PKHTTL
  • PKHSet, PKHGet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrlen
  • PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetall, PKHScan

These methods provide a comprehensive set of operations for working with hash data structures, including support for field-level expiration.

However, there are a few commented-out method declarations (PKHLenForce and PKHScanx). Consider removing these if they are not needed, or uncomment and implement them if they are required.
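
As a quick illustration of how these pieces fit together, a hedged usage sketch follows; the PKHSet and PKHExpire signatures are taken from this review, while the way the Storage instance is opened and owned is assumed and elided.

#include <cstdint>
#include <string>
#include <vector>

#include "storage/storage.h"

void PKHashExample(storage::Storage& db) {
  int32_t ret = 0;
  // Write one field, then give it a 60-second TTL.
  auto s = db.PKHSet("profile:1", "name", "alice", &ret);

  std::vector<std::string> fields{"name"};
  std::vector<int32_t> rets;
  if (s.ok()) {
    s = db.PKHExpire("profile:1", 60, static_cast<int32_t>(fields.size()), fields, &rets);
  }
}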


Line range hint 1-1154: General improvements and considerations

  1. File organization: Consider splitting this large header file into smaller, more focused files (e.g., one for each data type like strings, hashes, sets, etc.). This would improve maintainability and readability.

  2. Documentation: Many methods lack comments explaining their purpose, parameters, and return values. Consider adding consistent documentation for all public methods to improve code understandability.

  3. Consistent naming: Ensure consistent naming and capitalization across all methods; for example, the new PKH-prefixed methods should follow the same pattern as the existing commands (e.g., PKHExpireat alongside ZRangebyscore). Stick to one convention throughout the file.

  4. Error handling: Consider using a consistent approach for error handling across all methods. Some methods return Status, while others return int32_t or int64_t for error conditions.

  5. Use of modern C++ features: Consider using more modern C++ features like nullptr instead of NULL, override for virtual functions, and [[nodiscard]] attribute for methods returning important values.

  6. Const correctness: Ensure that methods that don't modify the object are marked as const.

  7. Parameter passing: For large objects, consider passing by const reference instead of by value to improve performance.

src/storage/src/redis_hashes.cc (3)

Line range hint 66-71: Improved field filtering logic.

The changes improve the efficiency of filtering unique fields. However, we can further optimize it by reserving space for the vectors.

Consider reserving space for filtered_fields to avoid potential reallocations:

 std::vector<std::string> filtered_fields;
 std::unordered_set<std::string> field_set;
+filtered_fields.reserve(fields.size());
+field_set.reserve(fields.size());
 for (const auto& iter : fields) {
   const std::string& field = iter;
   if (field_set.find(field) == field_set.end()) {
🧰 Tools
🪛 cppcheck

[performance] 69-69: Searching before insertion is not necessary.

(stlFindInsert)


91-93: Improved error message formatting.

The error message has been reformatted for better readability. However, consider using a formatting library like fmt for improved performance and maintainability.

Consider using the fmt library for string formatting:

-      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
-                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kHashes)] +
-                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
+      return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
+                                     key.ToString(),
+                                     DataTypeStrings[static_cast<int>(DataType::kHashes)],
+                                     DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]));

Line range hint 1-1354: Overall assessment of changes in redis_hashes.cc

The modifications in this file primarily focus on two areas:

  1. Improving error message formatting for consistency and readability.
  2. Adding checks for hash size overflow to prevent potential issues.

These changes enhance the overall quality and robustness of the code. However, there are opportunities for further improvement:

  1. Consider using a formatting library like fmt for string formatting throughout the file. This would improve performance and maintainability.
  2. In some functions, such as HDel, consider reserving space for vectors to optimize memory allocation.

The consistent application of these changes across multiple functions demonstrates a systematic approach to code improvement. Overall, these changes are approved and contribute positively to the codebase.

To further improve the code:

  1. Implement a centralized error message formatting function using the fmt library.
  2. Consider creating a utility function for checking hash size overflow, as this check is repeated in multiple places (a sketch follows this list).
  3. Review the entire codebase for similar patterns that could benefit from these improvements.
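
As a concrete starting point for item 2, a hedged sketch of such a helper (the name and placement are suggestions, not code from the PR):

#include <cstdint>
#include <limits>

// Returns true if adding `delta` fields to a hash currently holding
// `current_count` fields would overflow the int32_t count in the meta value.
inline bool HashCountWouldOverflow(int32_t current_count, int32_t delta) {
  return delta > 0 && current_count > std::numeric_limits<int32_t>::max() - delta;
}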
src/pika_command.cc (2)

56-57: Improve code formatting for consistency

There are some inconsistencies in the formatting of command initializations. Consider aligning the parameters for better readability.

Apply this formatting to all command initializations for consistency:

std::unique_ptr<Cmd> cmdptr = std::make_unique<CmdType>(
    kCmdNameCmd, arity, flags);

Also applies to: 60-61, 80-81, 85-86


Line range hint 1019-1058: Simplify cache-related logic in DoCommand method

The cache-related logic in the DoCommand method is a bit complex and could be simplified for better readability and maintainability.

Consider refactoring this section to reduce nesting and improve clarity. Here's a suggested structure:

void Cmd::DoCommand(const HintKeys& hint_keys) {
  if (!IsNeedCacheDo() || PIKA_CACHE_NONE == g_pika_conf->cache_mode() ||
      db_->cache()->CacheStatus() != PIKA_CACHE_STATUS_OK) {
    Do();
    return;
  }

  if (!cache_missed_in_rtc_ && IsNeedReadCache()) {
    ReadCache();
  }

  if (is_read() && (res().CacheMiss() || cache_missed_in_rtc_)) {
    pstd::lock::MultiScopeRecordLock record_lock(db_->LockMgr(), current_key());
    DoThroughDB();
    if (IsNeedUpdateCache()) {
      DoUpdateCache();
    }
  } else if (is_write()) {
    DoThroughDB();
    if (IsNeedUpdateCache()) {
      DoUpdateCache();
    }
  }
}
src/storage/src/storage.cc (4)

Line range hint 1100-1109: Potential misuse of auto in lambda parameter

In the std::for_each call, the lambda declares its parameter as auto kv, making it a generic lambda. Generic lambda parameters require C++14 or later, so if your codebase targets an earlier standard such as C++11, this will not compile.

Consider specifying the type explicitly:

-std::for_each(value_to_dest.begin(), value_to_dest.end(),
-              [&score_members](auto kv) { score_members.emplace_back(kv.second, kv.first); });
+std::for_each(value_to_dest.begin(), value_to_dest.end(),
+              [&score_members](const std::pair<std::string, double>& kv) { score_members.emplace_back(kv.second, kv.first); });

Line range hint 1457-1462: Possible incorrect comparison in loop condition

In the PKScanRange method, the loop condition compares miter.Key() with key_end.ToString() using <=, which may not function as intended if key_end is empty (indicating no limit). Also, the use of ToString() may create unnecessary copies.

Ensure that the loop condition correctly handles empty key_end and optimizes string comparisons:

-while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key().compare(key_end.ToString()) <= 0)) {
+while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key() <= key_end)) {

This change assumes that key_end is a std::string and that comparison operators are overloaded appropriately.


Line range hint 1519-1529: Inefficient passing of integer by const reference

In the PKPatternMatchDelWithRemoveKeys method, the parameter const int64_t& max_count passes a primitive type by const reference. Since int64_t is a simple data type, it's more efficient to pass it by value.

Consider changing the parameter to pass by value:

-Status Storage::PKPatternMatchDelWithRemoveKeys(const std::string& pattern, int64_t* ret,
-                                                std::vector<std::string>* remove_keys, const int64_t& max_count) {
+Status Storage::PKPatternMatchDelWithRemoveKeys(const std::string& pattern, int64_t* ret,
+                                                std::vector<std::string>* remove_keys, int64_t max_count) {

540-544: Remove commented-out code or explain its necessity

There is commented-out code for the PKHLenForce method. Leaving unused code can clutter the codebase and may cause confusion.

Consider removing the commented code if it's no longer needed, or add a comment explaining why it's retained.

-// Status Storage::PKHLenForce(const Slice& key, int32_t* ret) {
-//   auto& inst = GetDBInstance(key);
-//   return inst->PKHLenForce(key, ret);
-// }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 8eb37e8 and 378f5a3.

📒 Files selected for processing (19)
  • CMakeLists.txt (1 hunks)
  • include/acl.h (1 hunks)
  • include/pika_command.h (5 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_client_conn.cc (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (18 hunks)
  • src/storage/include/storage/storage_define.h (2 hunks)
  • src/storage/src/base_filter.h (6 hunks)
  • src/storage/src/base_value_format.h (5 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.cc (12 hunks)
  • src/storage/src/redis.h (17 hunks)
  • src/storage/src/redis_hashes.cc (35 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/redis_strings.cc (40 hunks)
  • src/storage/src/storage.cc (34 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/pika_client_conn.cc
🚧 Files skipped from review as they are similar to previous changes (7)
  • CMakeLists.txt
  • include/pika_command.h
  • include/pika_pkhash.h
  • src/pika_pkhash.cc
  • src/storage/src/pkhash_data_value_format.h
  • src/storage/src/redis_strings.cc
  • src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 568-568: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 777-777: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 860-860: Searching before insertion is not necessary.

(stlFindInsert)

🔇 Additional comments (69)
src/storage/include/storage/storage_define.h (2)

47-47: LGTM: New column family index added for PKHash data.

The addition of kPKHashDataCF = 7 to the ColumnFamilyIndex enum is consistent with the PR objectives and follows the correct sequential numbering.


135-135: LGTM: Added newline at end of file.

Adding a blank line at the end of the file is a good practice. It ensures the file ends with a newline character, which can prevent issues with certain tools and adheres to common coding standards.

src/storage/src/base_value_format.h (2)

34-35: LGTM! Arrays updated correctly.

The DataTypeTag and DataTypeStrings arrays have been properly updated to include the new kPKHashes type. The changes maintain consistency with the DataType enum modifications.


52-55: LGTM! Improved formatting and initialization.

The formatting changes enhance code readability. The initialization of ctime_ in the constructor ensures that the creation time is set correctly for each InternalValue instance.

src/storage/src/base_filter.h (8)

60-60: LGTM. Improved logging consistency.

The changes to the DEBUG logging statements improve readability and ensure consistent formatting across different data types.

Also applies to: 64-66


115-115: LGTM. Minor formatting improvement.

The condensed constructor initialization list improves code conciseness without affecting functionality.


158-160: LGTM. Added support for PKHashes data type.

The addition of PKHashes to the list of supported data types is consistent with the existing code structure and extends the filter's functionality.

Consider adding a test case to verify the correct handling of PKHashes in the BaseDataFilter.


230-231: LGTM. Improved constructor readability.

The reformatting of the BaseDataFilterFactory constructor parameters improves readability without affecting functionality.


250-253: LGTM. Added type aliases for PKHashes filters.

The addition of type aliases for PKHashes filters is consistent with the existing pattern for other data types and provides convenient shorthand for PKHashes-specific filters.

Consider adding a brief comment explaining the purpose of these aliases, especially if they're intended to be used in other parts of the codebase.


254-254: LGTM. Improved readability.

The addition of a blank line after the new type aliases improves code readability by clearly separating different groups of aliases.


Line range hint 1-274: Overall LGTM. PKHashes support added successfully.

The changes in this file successfully introduce support for the new PKHashes data type while maintaining consistency with the existing code structure. The modifications also include several formatting improvements that enhance code readability. All changes have been reviewed and approved.

Key points:

  1. New include statements added (verify dependencies).
  2. Support for PKHashes added to BaseDataFilter.
  3. New type aliases for PKHashes filters introduced.
  4. Various formatting improvements throughout the file.

Consider adding a test case for PKHashes handling in BaseDataFilter and brief documentation for the new type aliases.


16-19: LGTM. Verify new dependencies.

The addition of new include statements looks good. It suggests the introduction of new dependencies or code reorganization.

Please ensure that these new dependencies are necessary and don't introduce any circular dependencies. Run the following script to verify the includes:

✅ Verification successful

Verification Successful: New Dependencies are Required

The new include statements are necessary as the symbols BaseKey, BaseValue, and DEBUG are actively used in the code.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the necessity of new include statements

# Test: Check for the usage of symbols from the new headers
rg --type cpp -e 'BaseKey' -e 'BaseValue' -e 'DEBUG' src/storage/src/base_filter.h

Length of output: 920

include/acl.h (1)

55-55: LGTM: New AclCategory enum value added correctly.

The addition of the PKHASH enum value to the AclCategory enum class is implemented correctly. It follows the existing pattern and uses the next available bit in the sequence.

To ensure this new category is properly handled throughout the codebase, please run the following script:

This script will help identify areas where the new PKHASH category should be considered or handled.

src/storage/src/redis.h (4)

17-19: LGTM: New includes and PKHash handle retrieval method added.

The new include statements and the GetPKHashCFHandles() method are consistent with the existing codebase structure and support the newly added PK Hash functionality.

Also applies to: 248-250


468-469: LGTM: PK Hashes added to stale data check.

The ExpectedStale method has been correctly updated to include the new DataType::kPKHashes, ensuring consistent behavior for stale data checks across all data types.


Line range hint 416-447: Verify if PK Hashes require iterator support.

The CreateIterator method hasn't been updated to include support for PK Hashes. While it's possible that PK Hashes don't require a separate iterator, it would be good to confirm this to ensure consistency across all data types.

Could you please clarify if PK Hashes require iterator support? If they do, consider adding a case for PK Hashes in the CreateIterator method, similar to other data types.


Line range hint 1-557: Summary of changes: PK Hash support added with minor enhancements.

The changes in this file primarily introduce support for PK Hash operations, including new method declarations and updates to existing methods. The additions are well-structured and consistent with the existing codebase. A few suggestions for documentation improvements have been made, and clarification is needed regarding iterator support for PK Hashes. Overall, these changes enhance the functionality of the system while maintaining consistency with the existing design.

src/storage/include/storage/storage.h (3)

124-129: New struct FieldValueTTL added

The new FieldValueTTL struct extends the existing FieldValue struct by adding a ttl field. This is a good addition for supporting expiration-related operations on hash fields.
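
A plausible shape for the struct, inferred from this description (the actual member names, TTL type, and TTL unit in the PR may differ):

#include <cstdint>
#include <string>

struct FieldValueTTL {
  std::string field;
  std::string value;
  int64_t ttl = 0;  // remaining time-to-live; unit and "no TTL" sentinel assumed
};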


269-270: Updated method signatures

Several method signatures have been updated:

  1. Append: Added expired_timestamp_sec and out_new_value parameters.
  2. BitOp: Changed int64_t* ret to std::string& value_to_dest, int64_t* ret.
  3. SDiffstore, SInterstore, SUnionstore: Added std::vector<std::string>& value_to_dest parameter.
  4. SMembersWithTTL: Added int64_t *ttl parameter.
  5. LRangeWithTTL: Added int64_t *ttl parameter.
  6. ZRangeWithTTL: Added int64_t *ttl parameter.
  7. XInfo: Changed StreamInfoResult &result to StreamInfoResult* result.
  8. PKPatternMatchDelWithRemoveKeys: Added const int64_t& max_count parameter.

These updates improve the functionality of the methods by allowing for more precise control and additional data retrieval. The changes seem appropriate and consistent with the overall design.

Also applies to: 279-280, 495-496, 519-520, 529-529, 579-580, 603-603, 1013-1013, 1048-1049


1151-1152: ⚠️ Potential issue

Correct the spelling of EnableDymayticOptions to EnableDynamicOptions

The method name EnableDymayticOptions contains a typographical error. Please correct it to EnableDynamicOptions to improve code readability and prevent confusion.

Apply the following change:

-Status EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
                              const std::unordered_map<std::string, std::string>& options);

Ensure all references to this method are updated accordingly.

Likely invalid or redundant comment.

src/storage/src/redis_hashes.cc (18)

14-15: Include statements added appropriately.

The addition of base_data_key_format.h and base_data_value_format.h headers is appropriate and likely necessary for the functionality in this file.


118-121: Improved error handling for hash size overflow.

The addition of a check for hash size overflow enhances the robustness of the HDel function. This prevents potential issues with hash size manipulation.


154-156: Consistent error message formatting.

The error message formatting has been improved for consistency. As mentioned earlier, consider using the fmt library for string formatting throughout the codebase.


193-195: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


235-237: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


291-293: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


Line range hint 329-333: Improved error handling for hash size overflow in HIncrby.

The addition of a check for hash size overflow enhances the robustness of the HIncrby function. This prevents potential issues with hash size manipulation.


382-384: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


Line range hint 421-426: Improved error handling for hash size overflow in HIncrbyfloat.

The addition of a check for hash size overflow enhances the robustness of the HIncrbyfloat function. This prevents potential issues with hash size manipulation.


465-467: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


506-508: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


545-547: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


608-610: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


645-648: Improved error handling for hash size overflow in HMSet.

The addition of a check for hash size overflow enhances the robustness of the HMSet function. This prevents potential issues with hash size manipulation.


682-684: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


Line range hint 712-717: Improved error handling for hash size overflow in HSet.

The addition of a check for hash size overflow enhances the robustness of the HSet function. This prevents potential issues with hash size manipulation.


756-758: Consistent error message formatting.

The error message formatting has been improved for consistency. As previously suggested, consider using the fmt library for string formatting throughout the codebase.


Line range hint 778-783: Improved error handling for hash size overflow in HSetnx.

The addition of a check for hash size overflow enhances the robustness of the HSetnx function. This prevents potential issues with hash size manipulation.

src/pika_command.cc (18)

485-489: New PKHash commands added

The PKHSet command has been added to the command table. This is part of the new "Pika Expire Hash" functionality.

However, consider adding a brief comment explaining the purpose of the "Pika Expire Hash" commands for better code documentation.


490-493: PKHExpire command added

The PKHExpire command has been implemented, allowing for expiration of hash keys.


494-497: PKHExpireat command added

The PKHExpireat command has been implemented, allowing for setting expiration times for hash keys.


498-501: PKHExpiretime command added

The PKHExpiretime command has been implemented, likely for retrieving the expiration time of hash keys.


507-510: PKHPersist command added

The PKHPersist command has been implemented, likely for removing the expiration from a hash key.


511-514: PKHGet command added

The PKHGet command has been implemented for retrieving values from the hash.


516-518: PKHSetex command added

The PKHSetex command has been implemented, likely for setting a value in the hash with an expiration time.


520-522: PKHExists command added

The PKHExists command has been implemented for checking if a field exists in the hash.


524-526: PKHDel command added

The PKHDel command has been implemented for deleting fields from the hash.


528-530: PKHLen command added

The PKHLen command has been implemented for getting the number of fields in the hash.


532-534: PKHStrLen command added

The PKHStrLen command has been implemented for getting the length of a hash field's value.


536-538: PKHIncrby command added

The PKHIncrby command has been implemented for incrementing the value of a hash field.


540-542: PKHMSet command added

The PKHMSet command has been implemented for setting multiple hash fields.


548-550: PKHMGet command added

The PKHMGet command has been implemented for retrieving multiple hash fields.


552-554: PKHKeys command added

The PKHKeys command has been implemented for retrieving all field names in the hash.


561-563: PKHGetAll command added

The PKHGetAll command has been implemented for retrieving all fields and values in the hash.


565-567: PKHScan command added

The PKHScan command has been implemented for iterating over the hash.


Line range hint 1-1170: Summary of changes and recommendations

The file src/pika_command.cc has been updated with new PKHash (Pika Expire Hash) commands, significantly expanding the functionality of the Pika database. Here are the key points and recommendations:

  1. New PKHash commands have been successfully implemented, including PKHSet, PKHExpire, PKHExpireat, PKHExpiretime, PKHTTLCmd, PKHPersist, PKHGet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrLen, PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetAll, and PKHScan.

  2. The PKHTTLCmd is currently set with write flags, which should be changed to read flags as it's typically a read operation.

  3. There are some TODO comments in Chinese that need to be translated and addressed, particularly for the PKHVals and PKHMSetex commands.

  4. Consider adding brief comments explaining the purpose of the "Pika Expire Hash" commands for better code documentation.

  5. There are minor formatting inconsistencies in command initializations. Consider aligning parameters for better readability.

  6. The cache-related logic in the DoCommand method could be simplified for better readability and maintainability.

Recommendations:

  • Address the issues mentioned in the review comments, particularly the flag change for PKHTTLCmd and the TODO comments.
  • Improve code formatting for consistency.
  • Consider refactoring the DoCommand method to simplify the cache-related logic.
  • Add more comments explaining the purpose and functionality of the new PKHash commands.

Overall, the changes appear to be a solid implementation of the new PKHash functionality, but addressing these minor issues will improve the code quality and maintainability.

src/storage/src/redis.cc (7)

30-31: Constructor Initialization Formatting Improves Readability

The adjustment of the constructor's initialization list enhances code readability and follows standard C++ formatting conventions.


181-182: Method Signature Formatting Enhances Readability

Reformatting the parameters of GetScanStartPoint improves readability and conforms to the project's coding style for line breaks in method signatures.


218-218: Include 'pika_hash_data_cf' in Compaction Range

Adding handles_[kPKHashDataCF] to the CompactRange method ensures that the new pika_hash_data_cf column family is included in compaction operations. This is essential for maintaining database performance and storage efficiency for the new hash data type.


265-266: Method Signature Formatting Improves Clarity

The reformatting of AddCompactKeyTaskIfNeeded enhances the clarity of the method signature, making it easier to read and understand the parameters.


293-495: Enhanced Retrieval of RocksDB Statistics

The restructuring of GetRocksDBInfo significantly improves the collection and presentation of RocksDB statistics. The use of lambda functions like write_aggregated_int_property, write_property, and write_ticker_count promotes code reusability and cleaner organization. Including detailed metrics on blob files, compaction, memtables, and various cache usages provides valuable insights for monitoring and debugging.
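
For readers unfamiliar with the pattern, a minimal sketch of the helper approach described here; the function name, prefix handling, and output formatting are illustrative rather than the PR's exact code.

#include <cstdint>
#include <sstream>
#include <string>

#include "rocksdb/db.h"

// Append one "<prefix><name>: <value>" line for an aggregated integer property.
void AppendAggregatedIntProperty(rocksdb::DB* db, std::ostringstream& out,
                                 const std::string& prefix,
                                 const rocksdb::Slice& property,
                                 const std::string& name) {
  uint64_t value = 0;
  if (db->GetAggregatedIntProperty(property, &value)) {
    out << prefix << name << ": " << value << "\r\n";
  }
}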


498-498: Inlined 'SetWriteWalOptions' Method Enhances Conciseness

Inlining the SetWriteWalOptions method improves code conciseness without compromising readability. It simplifies the method definition for a straightforward assignment operation.


102-111: Ensure Proper Integration of 'pika_hash_data_cf' Column Family

The addition of the 'pika_hash_data_cf' column family is a significant change. Please ensure that:

  • The DataType used in HashesDataFilterFactory corresponds to the new data type for PKHash, if applicable.
  • The handles_ vector correctly accommodates the new column family, and any indices used elsewhere in the code (e.g., handles_[kPKHashDataCF]) match the order of column families added.
  • All necessary configurations specific to pika_hash_data_cf are properly set to avoid runtime issues.

Run the following script to confirm that kPKHashDataCF is properly defined and used consistently:

src/storage/src/storage.cc (6)

284-285: Ensure consistent parameter passing style for output parameters

In the Append method, out_new_value is passed as a non-const reference (std::string&). For consistency with other methods that use pointers for output parameters (e.g., std::string* value), consider changing out_new_value to a pointer (std::string* out_new_value).


480-484: Use int64_t for TTL parameter to support larger values

The PKHExpire method uses int32_t ttl as the TTL parameter. To handle larger TTL values and maintain consistency with other methods, consider changing ttl to int64_t.


550-553: Use int64_t for TTL parameter in PKHIncrby

The PKHIncrby method uses int32_t ttl as the TTL parameter. To support larger TTL values and ensure consistency with other methods like Setex, consider changing ttl to int64_t.


1289-1291: Ensure consistent parameter passing style for output parameters

The XInfo method uses a non-const reference (StreamInfoResult& result) as an output parameter. For consistency across the codebase, consider changing result to a pointer (StreamInfoResult* result).


2006-2007: Typographical error in method name

Similarly, the method EnableDymayticOptions has a typo. It should be EnableDynamicOptions.


1988-1989: ⚠️ Potential issue

Typographical error in method name

The method EnableDymayticOptions seems to have a typo in its name. It should likely be EnableDynamicOptions.

Correct the method name to improve code clarity and prevent confusion.

-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,

Ensure that all calls to this method are updated accordingly.

Likely invalid or redundant comment.

Comment on lines 543 to 546
// 参数可能是 -6 需要ttl  (translation: the arity may need to be -6, with a TTL argument)
std::unique_ptr<Cmd> ehmsetexptr = std::make_unique<PKHMSetexCmd>(
kCmdNamePKHMSetex, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHMSetex, std::move(ehmsetexptr)));

⚠️ Potential issue

PKHMSetex command added with a TODO comment

The PKHMSetex command has been implemented, but there's a TODO comment in Chinese about the number of parameters.

Translate the TODO comment to English and clarify the parameter count issue. Consider adding a comment explaining the expected parameter structure for this command.

Comment on lines +502 to +505
////Ehttl
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));

⚠️ Potential issue

PKHTTLCmd command added with incorrect flags

The PKHTTLCmd has been added, but it's currently set with write flags. This command is typically used to get the time-to-live for a key, which is a read operation.

Change the flags for PKHTTLCmd to use read flags instead of write flags. Apply this diff:

-  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
-      kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
+      kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
////Ehttl
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));
////Ehttl
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));

Comment on lines +167 to +168
// pika hash CF
column_families.emplace_back("pika_hash_data_cf", pika_hash_data_cf_ops);

⚠️ Potential issue

Update Column Family Indices After Adding New Column Family

With the addition of 'pika_hash_data_cf' to the column_families vector, the indices of existing column families in the handles_ array may have shifted. Please ensure that all references to handles_ use the correct indices corresponding to the updated order of column families to prevent out-of-bounds errors or misreferenced handles.

Consider updating the enumeration or constants that represent the indices of each column family in handles_.
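
For illustration only, a hedged sketch of keeping existing handle indices stable by appending the new enumerator last; only kPKHashDataCF = 7 comes from this PR, and the other enumerator names are placeholders:

enum ColumnFamilyIndex {
  kMetaCF = 0,
  // ... the existing data column families keep their current values (1..6) ...
  kPKHashDataCF = 7,  // new pika_hash_data_cf appended last, so existing indices do not shift
};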

Comment on lines +493 to +494
db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
mapToString(mapvalues, prefix);

⚠️ Potential issue

Correct Method Invocation of 'GetMapProperty'

At line 493, the method call to GetMapProperty incorrectly qualifies the method with rocksdb::DB::. Since db_ is a pointer to rocksdb::DB, the correct invocation should be:

db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);

Apply this diff to fix the method call:

-  db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
+  db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);

Comment on lines +94 to +95
hash_data_cf_ops.compaction_filter_factory =
std::make_shared<HashesDataFilterFactory>(&db_, &handles_, DataType::kHashes);

⚠️ Potential issue

Confirm Correct DataType for HashesDataFilterFactory

The HashesDataFilterFactory is initialized with DataType::kHashes. Since a new hash data type is being introduced, please verify that this is the intended DataType for pika_hash_data_cf_ops. If DataType::kPKHashes (or an equivalent) exists for the new PKHash commands, it should be used instead to ensure the compaction filter operates correctly on the new data type.

Apply this diff if a new DataType is defined for PKHash:

 hash_data_cf_ops.compaction_filter_factory =
-    std::make_shared<HashesDataFilterFactory>(&db_, &handles_, DataType::kHashes);
+    std::make_shared<HashesDataFilterFactory>(&db_, &handles_, DataType::kPKHashes);

Committable suggestion was skipped due to low confidence.

Comment on lines 77 to 88
// 1. 判断类型是否匹配和key是否过期【这里通过meta key来实现过期功能】。 (translation: check that the type matches and whether the key has expired; expiration is implemented via the meta key)
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
if (ExpectedStale(meta_value)) {
s = Status::NotFound();
} else {
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}
// 2. 健存在,并且没有过期,则更新。 (translation: the key exists and has not expired, so update it)
if (s.ok()) {

🛠️ Refactor suggestion

Translate code comments to English for consistency

Several code comments are written in Chinese (e.g., lines 77-81, 87-88, 136, 480-481). To maintain consistency and enhance readability for all contributors, please translate these comments into English.

Also applies to: 136-136, 480-481

Comment on lines +507 to +518
if (s.ok()) {
*ret = 1;
if (s.ok()) {

🛠️ Refactor suggestion

Remove redundant if (s.ok()) check

Within the if (s.ok()) block starting at line 507, there is a redundant check of if (s.ok()) at line 509. Since the status s has not changed between these checks, the inner if (s.ok()) can be removed to simplify the code.

Apply this diff to remove the redundant check:

s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
if (s.ok()) {
  *ret = 1;
-  if (s.ok()) {
    PKHashDataValue ehashes_value(value);
    ehashes_value.SetRelativeTimestamp(ttl);
    batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
    statistic++;
-  } else if (s.IsNotFound()) {
+} else if (s.IsNotFound()) {
    // Handle the not found case

Committable suggestion was skipped due to low confidence.

Comment on lines +520 to +533
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret) {
auto& inst = GetDBInstance(key);
return inst->PKHSetex(key, field, value, ttl, ret);
}

⚠️ Potential issue

Use int64_t for TTL parameter in PKHSetex

In the PKHSetex method, the TTL parameter ttl is currently an int32_t. Consider changing it to int64_t to support larger TTL values and maintain consistency across similar methods.

Apply this diff:

-Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret) {
+Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret) {
auto& inst = GetDBInstance(key);
return inst->PKHSetex(key, field, value, ttl, ret);
}
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret) {
auto& inst = GetDBInstance(key);
return inst->PKHSetex(key, field, value, ttl, ret);
}

Comment on lines +1137 to +1136
std::for_each(value_to_dest.begin(), value_to_dest.end(),
[&score_members](auto kv) { score_members.emplace_back(kv.second, kv.first); });

⚠️ Potential issue

Ensure compatibility with C++ standards when using generic lambdas

The lambda function in std::for_each uses auto in its parameter list. This feature requires C++14 or newer. If the codebase targets C++11, this will cause compilation errors.

Specify the parameter type explicitly for C++11 compatibility:

-std::for_each(value_to_dest.begin(), value_to_dest.end(),
-              [&score_members](auto kv) { score_members.emplace_back(kv.second, kv.first); });
+std::for_each(value_to_dest.begin(), value_to_dest.end(),
+              [&score_members](const std::pair<std::string, double>& kv) { score_members.emplace_back(kv.second, kv.first); });

This change ensures that the code remains compatible with earlier C++ standards.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
std::for_each(value_to_dest.begin(), value_to_dest.end(),
[&score_members](auto kv) { score_members.emplace_back(kv.second, kv.first); });
std::for_each(value_to_dest.begin(), value_to_dest.end(),
[&score_members](const std::pair<std::string, double>& kv) { score_members.emplace_back(kv.second, kv.first); });

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 32

🧹 Outside diff range and nitpick comments (1)
src/storage/src/redis.h (1)

254-294: Approve new PK Hash Commands, suggest documentation

The new PK Hash Commands methods are a welcome addition to the Redis class. They follow a consistent naming convention and provide a comprehensive set of operations for PK Hash data structures.

Consider adding documentation comments for these new methods to explain their purpose, parameters, and return values. This will help other developers understand and use the new functionality correctly.
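As a hedged illustration of the kind of header comment that could precede these declarations (the per-field result-code semantics are an assumption; the signature matches the PKHExpire declaration shown elsewhere in this review):

// Applies a relative TTL (seconds) to the given fields of the PK hash stored at `key`.
// `rets` receives one per-field result code (for example: applied vs. field missing);
// returns Status::NotFound() when the key itself does not exist or has expired.
Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields,
                 const std::vector<std::string>& fields, std::vector<int32_t>* rets);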

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 378f5a3 and 9d208a6.

📒 Files selected for processing (19)
  • include/acl.h (1 hunks)
  • include/pika_command.h (5 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_client_conn.cc (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (18 hunks)
  • src/storage/include/storage/storage_define.h (2 hunks)
  • src/storage/src/base_filter.h (6 hunks)
  • src/storage/src/base_value_format.h (5 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.cc (12 hunks)
  • src/storage/src/redis.h (17 hunks)
  • src/storage/src/redis_hashes.cc (35 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/redis_strings.cc (39 hunks)
  • src/storage/src/storage.cc (34 hunks)
  • src/storage/tests/hashes_test.cc (3 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/storage/tests/hashes_test.cc
🚧 Files skipped from review as they are similar to previous changes (14)
  • include/acl.h
  • include/pika_command.h
  • include/pika_pkhash.h
  • src/pika_client_conn.cc
  • src/storage/include/storage/storage.h
  • src/storage/include/storage/storage_define.h
  • src/storage/src/base_filter.h
  • src/storage/src/base_value_format.h
  • src/storage/src/pkhash_data_value_format.h
  • src/storage/src/redis.cc
  • src/storage/src/redis_hashes.cc
  • src/storage/src/redis_strings.cc
  • src/storage/src/storage.cc
  • src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 565-565: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 774-774: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 857-857: Searching before insertion is not necessary.

(stlFindInsert)

🔇 Additional comments (12)
src/pika_pkhash.cc (5)

28-33: 🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator increments are unnecessary and don't affect the parsing logic.

Apply this diff to remove the unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
-iter++;

Likely invalid or redundant comment.


37-38: ⚠️ Potential issue

Potential error in assigning fields

The assignment of fields may be incorrect due to the unnecessary iterator increments.

Apply this diff to correctly assign fields:

-fields_.assign(iter, argv_.end());
+fields_.assign(argv_.begin() + 5, argv_.end());

Likely invalid or redundant comment.


33-35: ⚠️ Potential issue

Fix incorrect parsing of numfields value

Similar to the TTL parsing, the condition for parsing numfields is incorrect.

Apply this diff to fix the parsing:

-if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) != 1) {

Likely invalid or redundant comment.


458-468: 🛠️ Refactor suggestion

Simplify error handling in PKHMSetCmd::Do()

The error handling can be simplified by combining the ok and IsInvalidArgument cases.

Apply this diff to simplify the error handling:

 void PKHMSetCmd::Do() {
   s_ = db_->storage()->HMSet(key_, fvs_);
-  if (s_.ok()) {
+  if (s_.ok() || s_.IsInvalidArgument()) {
     res_.SetRes(CmdRes::kOk);
     AddSlotKey("h", key_, db_);
-  } else if (s_.IsInvalidArgument()) {
-    res_.SetRes(CmdRes::kMultiKey);
   } else {
     res_.SetRes(CmdRes::kErrOther, s_.ToString());
   }
 }

Likely invalid or redundant comment.


252-262: 🛠️ Refactor suggestion

Simplify error handling in PKHSetCmd::Do()

The error handling can be simplified by combining the ok and IsInvalidArgument cases.

Apply this diff to simplify the error handling:

 void PKHSetCmd::Do() {
   int32_t ret = 0;
   s_ = db_->storage()->PKHSet(key_, field_, value_, &ret);
-  if (s_.ok()) {
+  if (s_.ok() || s_.IsInvalidArgument()) {
     res_.AppendContent(":" + std::to_string(ret));
     AddSlotKey("h", key_, db_);
-  } else if (s_.IsInvalidArgument()) {
-    res_.SetRes(CmdRes::kMultiKey);
   } else {
     res_.SetRes(CmdRes::kErrOther, s_.ToString());
   }
 }

Likely invalid or redundant comment.

src/storage/src/redis.h (2)

17-19: LGTM: New includes and GetPKHashCFHandles method added

The new include statements and the GetPKHashCFHandles method are appropriate additions for the new PK Hash functionality. The method follows the established pattern for other data types in the class.

Also applies to: 248-250


468-469: Verify complete integration of PK Hash functionality

The ExpectedStale method has been correctly updated to include PKHashes. However, it's important to ensure that all relevant parts of the codebase have been updated to fully support the new PK Hash operations.

Please run the following script to check for any other occurrences of "Hash" or "Hashes" that might need to be updated:

Review the results to determine if any other parts of the code need to be updated to fully support PK Hash operations.

src/storage/src/redis_pkhashes.cc (5)

477-478: Translate code comments to English for consistency

The code comment at lines 477-478 is in Chinese:

// 1. 判断类型是否匹配和key是否过期。 (Check whether the type matches and whether the key has expired.)

To maintain consistency and readability for all contributors, please translate this comment into English.


562-568: Eliminate redundant check before inserting into field_set

The explicit check field_set.find(field) == field_set.end() before insertion is unnecessary. You can rely on the result of field_set.insert(field) to determine if the field was newly inserted, which optimizes performance.
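A minimal sketch of the pattern cppcheck is pointing at, reusing the names from this hunk (the loop body is assumed):

// insert() already reports whether the element was new via .second,
// so the container is searched once instead of twice.
if (field_set.insert(field).second) {
  // ... handle the newly seen field (assumed body) ...
}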

🧰 Tools
🪛 cppcheck

[performance] 565-565: Searching before insertion is not necessary.

(stlFindInsert)


771-777: Eliminate redundant check before inserting into fields

Similarly, in this loop, you can remove the explicit search before insertion into fields. Using the result of fields.insert(field) enhances efficiency.

🧰 Tools
🪛 cppcheck

[performance] 774-774: Searching before insertion is not necessary.

(stlFindInsert)


854-860: Remove unnecessary check before inserting into fields

As with the previous instances, eliminate the redundant check before inserting into fields to optimize the field filtering process.

🧰 Tools
🪛 cppcheck

[performance] 857-857: Searching before insertion is not necessary.

(stlFindInsert)


680-765: Address the unused ttl parameter in PKHIncrby

The ttl parameter in the PKHIncrby function is not used. If TTL functionality is intended for this operation, please implement the usage of ttl. Otherwise, consider removing the parameter to avoid confusion.
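If the TTL is meant to be applied, a hedged sketch of the write-back path, borrowing PKHashDataValue, SetRelativeTimestamp, and the kPKHashDataCF handle from other hunks of this PR; the surrounding variable names are assumptions, not the actual implementation:

// Hypothetical fragment inside PKHIncrby after the new number has been computed.
PKHashDataValue internal_value(std::to_string(new_number));
if (ttl > 0) {
  internal_value.SetRelativeTimestamp(ttl);  // honor the otherwise-unused ttl argument
}
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());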

Comment on lines +105 to +109
iter++;
iter++;
iter++;
iter++;


🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator increments are unnecessary and don't affect the parsing logic.

Apply this diff to remove the unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
iter++;
iter++;
iter++;
iter++;

Comment on lines +68 to +73
iter++;
iter++;
iter++;
iter++;
iter++;


🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator increments are unnecessary and don't affect the parsing logic.

Apply this diff to remove the unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
iter++;
iter++;
iter++;
iter++;
iter++;

Comment on lines +140 to +144
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {

🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator increments are unnecessary and don't affect the parsing logic.

Apply this diff to remove the unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {

Comment on lines +174 to +178
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {

🛠️ Refactor suggestion

Remove unnecessary iterator increments

The iterator increments are unnecessary and don't affect the parsing logic.

Apply this diff to remove the unnecessary increments:

-iter++;
-iter++;
-iter++;
-iter++;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {

Comment on lines +178 to +181
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
res_.SetRes(CmdRes::kInvalidInt);
return;
}

⚠️ Potential issue

Fix incorrect parsing of numfields value

The condition for parsing numfields is incorrect.

Apply this diff to fix the parsing:

-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
res_.SetRes(CmdRes::kInvalidInt);
return;
}
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
res_.SetRes(CmdRes::kInvalidInt);
return;
}

Comment on lines +551 to +555
// TODO(DDD) 为啥vals是慢的命令。 (Why is vals a slow command?)
// TODO(DDD) 这些标志位都是啥意思。 (What do these flag bits mean?)
std::unique_ptr<Cmd> ehvalsptr = std::make_unique<PKHValsCmd>(
kCmdNamePKHVals, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHVals, std::move(ehvalsptr)));

⚠️ Potential issue

Address TODO comments and clarify command performance

There are two TODO comments in Chinese that need to be addressed:

  1. "Why is vals a slow command?"
  2. "What do these flag bits mean?"

Please translate these comments to English and provide explanations or necessary changes based on the answers to these questions. Additionally, verify whether PKHValsCmd should indeed be marked with kCmdFlagsSlow. If the PKHVals operation is time-consuming due to data size or processing complexity, marking it as slow may be appropriate.

Comment on lines +485 to +563
// Pika Expire Hash
std::unique_ptr<Cmd> ehsetptr = std::make_unique<PKHSetCmd>(
kCmdNamePKHSet, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHSet, std::move(ehsetptr)));

////Ehexpire
std::unique_ptr<Cmd> ehexpireptr = std::make_unique<PKHExpireCmd>(
kCmdNamePKHExpire, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpire, std::move(ehexpireptr)));
////Ehexpireat
std::unique_ptr<Cmd> ehexpireatptr = std::make_unique<PKHExpireatCmd>(
kCmdNamePKHExpireat, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpireat, std::move(ehexpireatptr)));
////Ehexpiretime
std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpiretime, std::move(ehexpiretimeptr)));
////Ehttl
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));

////Ehpersist
std::unique_ptr<Cmd> ehpersistptr = std::make_unique<PKHPersistCmd>(
kCmdNamePKHPersist, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHPersist, std::move(ehpersistptr)));
////Ehget
std::unique_ptr<Cmd> ehgetptr = std::make_unique<PKHGetCmd>(
kCmdNamePKHGet, 3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHGet, std::move(ehgetptr)));

std::unique_ptr<Cmd> ehsetexptr = std::make_unique<PKHSetexCmd>(
kCmdNamePKHSetex, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHSetex, std::move(ehsetexptr)));

std::unique_ptr<Cmd> ehexistsptr = std::make_unique<PKHExistsCmd>(
kCmdNamePKHExists, 3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExists, std::move(ehexistsptr)));

std::unique_ptr<Cmd> ehdelptr = std::make_unique<PKHDelCmd>(
kCmdNamePKHDel, -3, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHDel, std::move(ehdelptr)));

std::unique_ptr<Cmd> ehlenptr = std::make_unique<PKHLenCmd>(
kCmdNamePKHLen, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHLen, std::move(ehlenptr)));

std::unique_ptr<Cmd> ehstrlenptr = std::make_unique<PKHStrLenCmd>(
kCmdNamePKHStrlen, 3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHStrlen, std::move(ehstrlenptr)));

std::unique_ptr<Cmd> ehincrbyptr = std::make_unique<PKHIncrbyCmd>(
kCmdNamePKHIncrby, 4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHIncrby, std::move(ehincrbyptr)));

std::unique_ptr<Cmd> ehmsetptr = std::make_unique<PKHMSetCmd>(
kCmdNamePKHMSet, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHMSet, std::move(ehmsetptr)));

std::unique_ptr<Cmd> ehmgetptr = std::make_unique<PKHMGetCmd>(
kCmdNamePKHMGet, -3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHMGet, std::move(ehmgetptr)));

std::unique_ptr<Cmd> ehkeysptr = std::make_unique<PKHKeysCmd>(
kCmdNamePKHKeys, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHKeys, std::move(ehkeysptr)));
// TODO(DDD) 为啥vals是慢的命令。 (Why is vals a slow command?)
// TODO(DDD) 这些标志位都是啥意思。 (What do these flag bits mean?)
std::unique_ptr<Cmd> ehvalsptr = std::make_unique<PKHValsCmd>(
kCmdNamePKHVals, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHVals, std::move(ehvalsptr)));

std::unique_ptr<Cmd> ehgetallptr = std::make_unique<PKHGetAllCmd>(
kCmdNamePKHGetall, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHGetall, std::move(ehgetallptr)));

std::unique_ptr<Cmd> ehscanptr = std::make_unique<PKHScanCmd>(
kCmdNamePKHScan, -3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHScan, std::move(ehscanptr)));

⚠️ Potential issue

Review command flags for consistency

Please review the command flags for all PKHash commands to ensure consistency. Some observations:

  1. Most read operations use kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast.
  2. Most write operations use kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast.
  3. Some commands (e.g., PKHVals, PKHGetall, PKHScan) are marked as slow, while others with potentially similar complexity are marked as fast.

Ensure that the performance characteristics of each command are accurately reflected in their flags. If certain commands are indeed slower due to their implementation or typical use cases, consider adding comments explaining why they are marked as slow.

Comment on lines +503 to +505
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));

⚠️ Potential issue

Correct command flags for PKHTTLCmd

The PKHTTL command is intended to return the time-to-live for a hash key, which is a read operation. However, it is currently flagged as a write command. Please change the flags to use kCmdFlagsRead to accurately reflect its functionality.

-  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
-      kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
+      kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));

Comment on lines 155 to 159
if (ttl <= 0) {
return Status::InvalidArgument("invalid expire time, must be >= 0");
}

⚠️ Potential issue

Correct the error message regarding TTL value

The condition checks if ttl <= 0, but the error message states "must be >= 0", which includes zero. Since a TTL of zero is invalid, the error message should read "must be > 0" to be accurate.

Apply this diff to correct the error message:

if (ttl <= 0) {
-  return Status::InvalidArgument("invalid expire time, must be >= 0");
+  return Status::InvalidArgument("invalid expire time, must be > 0");
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (ttl <= 0) {
return Status::InvalidArgument("invalid expire time, must be >= 0");
}
if (ttl <= 0) {
return Status::InvalidArgument("invalid expire time, must be > 0");
}

Comment on lines +217 to +224
if (timestamp <= 0) {
rets->assign(numfields, 2);
return Status::InvalidArgument("invalid expire time, must be >= 0");
}

⚠️ Potential issue

Correct the error message regarding expiration timestamp

The condition checks if timestamp <= 0, but the error message says "must be >= 0", which includes zero. Since a timestamp of zero is invalid, the error message should state "must be > 0" to be accurate.

Apply this diff to correct the error message:

if (timestamp <= 0) {
  rets->assign(numfields, 2);
-  return Status::InvalidArgument("invalid expire time, must be >= 0");
+  return Status::InvalidArgument("invalid expire time, must be > 0");
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (timestamp <= 0) {
rets->assign(numfields, 2);
return Status::InvalidArgument("invalid expire time, must be >= 0");
}
if (timestamp <= 0) {
rets->assign(numfields, 2);
return Status::InvalidArgument("invalid expire time, must be > 0");
}

@@ -0,0 +1,469 @@
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
Collaborator

Change it to 2024.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 34

🧹 Outside diff range and nitpick comments (16)
src/storage/include/storage/storage_define.h (1)

47-47: LGTM. Consider adding documentation for the new enum value.

The addition of kPKHashDataCF = 7 to the ColumnFamilyIndex enum is correct and aligns with the PR objectives. This will support the new PKHash functionality.

Consider adding a brief comment explaining the purpose of this new column family, similar to the existing comment for kMetaCF.

src/storage/src/base_value_format.h (2)

21-31: LGTM! Consider adding a comment for kAll.

The additions to the DataType enum are well-structured and align with the PR objectives. The new kPKHashes type supports the introduction of PKHash commands, and kAll is a logical addition.

Consider adding a brief comment for the kAll enum value to clarify its purpose, e.g.:

kAll = 8,  // Represents all data types, used for operations that apply to any type

Line range hint 57-61: LGTM! Consider adding a noexcept specifier.

The changes to the InternalValue destructor are good improvements:

  1. Making it virtual allows for proper cleanup in derived classes.
  2. The added memory management prevents potential memory leaks.

Consider adding the noexcept specifier to the destructor:

virtual ~InternalValue() noexcept {
  // ... existing code ...
}

This ensures that the destructor doesn't throw exceptions, which is a best practice for destructors.

include/pika_pkhash.h (1)

1-469: Overall, well-structured and consistent implementation

The file implements a comprehensive set of command classes for PKHash operations, following a consistent structure and design pattern. The code demonstrates good adherence to object-oriented principles and separation of concerns. Minor issues with uninitialized member variables have been noted in previous comments.

Consider adding comments to explain the purpose of each command class and any complex logic within the methods. This would enhance code readability and maintainability.

include/pika_command.h (2)

141-158: LGTM! Consider grouping PKHash constants.

The addition of PKHash command constants is well-structured and follows the existing naming conventions. This will facilitate the implementation of PKHash functionality throughout the codebase.

For improved readability and consistency with other command groups, consider adding a comment line above these constants, similar to how other command groups are separated (e.g., "// Hash", "// List", etc.).


141-158: Summary: PKHash support added consistently.

The changes in this file lay the groundwork for PKHash functionality by adding necessary constants and a flag. These additions are consistent with the existing code structure and naming conventions.

As you continue implementing PKHash functionality:

  1. Ensure that all relevant command handlers and processors are updated to use these new constants and the kCmdFlagsPKHash flag.
  2. Update any command tables or registries to include the new PKHash commands.
  3. Consider adding unit tests specifically for PKHash command parsing and flag checking to ensure robustness of the new functionality.

Also applies to: 312-313

src/storage/src/redis_hashes.cc (3)

90-92: Improved error message formatting.

The multi-line format enhances readability. Consider using a string literal with embedded newlines for even better readability:

-return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
-                               ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kHashes)] +
-                               ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
+return Status::InvalidArgument(
+    "WRONGTYPE, key: " + key.ToString() + "\n"
+    "expect type: " + DataTypeStrings[static_cast<int>(DataType::kHashes)] + "\n"
+    "get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);

153-155: Consistent improvement in error message formatting across multiple functions.

The multi-line formatting has been consistently applied to error messages throughout the file, enhancing readability. Consider applying the string literal with embedded newlines suggestion to all these instances for further improvement and consistency.

Also applies to: 192-194, 234-236, 289-291, 380-382, 463-465, 504-506


1153-1155: Further consistent improvement in error message formatting.

The multi-line formatting for error messages has been consistently applied to these additional instances. As suggested earlier, consider using string literals with embedded newlines for even better readability and consistency across all error messages in the file.

Also applies to: 1192-1194, 1228-1230, 1266-1268, 1304-1306

src/storage/src/redis.cc (1)

42-42: Remove unused code related to 'env_'

The member env_ is commented out in the constructor (// env_ = rocksdb::Env::Instance();) and in the destructor (// delete env_;). If env_ is no longer needed, consider removing both the commented-out initialization and cleanup to keep the codebase clean.

src/storage/src/redis.h (1)

537-538: Address the TODO: Separate environment for each RocksDB instance

The TODO comment suggests separating the environment for each RocksDB instance. Implementing this can enhance resource isolation and stability.

Would you like assistance in designing or implementing this change?

src/storage/include/storage/storage.h (1)

Line range hint 94-95: Fix the typo: 'invaild_keys' should be 'invalid_keys'

The member variable invaild_keys in the KeyInfo struct is misspelled. It should be invalid_keys. This typo occurs in the declaration and usage within the struct.

Apply this diff to correct the typo:

 struct KeyInfo {
   uint64_t keys = 0;
   uint64_t expires = 0;
   uint64_t avg_ttl = 0;
-  uint64_t invaild_keys = 0;
+  uint64_t invalid_keys = 0;

   KeyInfo() : keys(0), expires(0), avg_ttl(0), 
-              invaild_keys(0) {}
+              invalid_keys(0) {}

   KeyInfo(uint64_t k, uint64_t e, uint64_t a, uint64_t i)
-      : keys(k), expires(e), avg_ttl(a), invaild_keys(i) {}
+      : keys(k), expires(e), avg_ttl(a), invalid_keys(i) {}
src/storage/src/storage.cc (3)

Line range hint 199-211: Refactor Suggestion: Eliminate Code Duplication in MGet Functions

The Storage::MGet and Storage::MGetWithTTL functions contain similar loops iterating over the keys to retrieve values. To improve maintainability and reduce code duplication, consider extracting the common logic into a shared helper function that both methods can utilize.

Also applies to: 218-231


2016-2017: Typographical Error in Function Name EnableDymayticOptions

The function name EnableDymayticOptions appears to contain a typographical error. It should likely be EnableDynamicOptions for clarity and consistency with naming conventions.

Apply this diff to correct the typo:

-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,

1991-1991: Nitpick: Add Space for Clarity in Log Message

The log message concatenates strings without spaces, which may reduce readability. Consider adding a space or delimiter for clarity.

Apply this diff:

-      LOG(WARNING) << "Invalid DB Index: " << index << "total: " << db_instance_num_;
+      LOG(WARNING) << "Invalid DB Index: " << index << ", total: " << db_instance_num_;
src/pika_command.cc (1)

966-966: Correct the grammatical error in the comment

The comment contains a grammatical error. It should be corrected for clarity.

Apply this diff:

-      Clear();  // Clear cmd, Derived class can has own implement
+      Clear();  // Clear command; derived classes can have their own implementations
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 9d208a6 and 25c41e8.

📒 Files selected for processing (19)
  • include/acl.h (1 hunks)
  • include/pika_command.h (5 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_client_conn.cc (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (13 hunks)
  • src/storage/include/storage/storage_define.h (2 hunks)
  • src/storage/src/base_filter.h (6 hunks)
  • src/storage/src/base_value_format.h (5 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.cc (12 hunks)
  • src/storage/src/redis.h (14 hunks)
  • src/storage/src/redis_hashes.cc (29 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/redis_strings.cc (39 hunks)
  • src/storage/src/storage.cc (28 hunks)
  • src/storage/tests/hashes_test.cc (3 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • include/acl.h
  • src/pika_client_conn.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 565-565: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 774-774: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 857-857: Searching before insertion is not necessary.

(stlFindInsert)

src/storage/src/redis_strings.cc

[error] 917-917: Unmatched '{'. Configuration

(syntaxError)

🔇 Additional comments (45)
src/storage/include/storage/storage_define.h (1)

Line range hint 1-135: Summary of changes in storage_define.h

  1. Added kPKHashDataCF = 7 to the ColumnFamilyIndex enum, which supports the new PKHash functionality introduced in this PR.
  2. Minor formatting improvements in the SeekUserkeyDelim function.

These changes are consistent with the PR objectives and related changes in other files. They lay the groundwork for the new PKHash commands and data management.

src/storage/src/base_value_format.h (1)

35-35: LGTM! DataTypeStrings array correctly updated.

The DataTypeStrings array has been properly updated to include the new data types. The additions of "pkhash", "none", and "all" align correctly with the enum changes.

include/pika_pkhash.h (1)

37-37: Verify the empty Clear() method

The Clear() method is currently empty. Please confirm if this is intentional or if it should be implemented to reset the internal state of the command object.

src/storage/src/redis_hashes.cc (5)

14-15: Appropriate header inclusions added.

The addition of these headers likely provides necessary definitions for key and value formatting used in hash operations.


855-856: Improved function signature formatting.

Splitting the long function signature across multiple lines enhances readability. This change is in line with good coding practices for handling long lines of code.


878-880: Consistent improvement in function signature formatting.

Long function signatures have been consistently split across multiple lines throughout the file. This change enhances code readability and is in line with good coding practices for handling long lines of code.

Also applies to: 937-938, 998-999, 1068-1069


1350-1351: Improved code formatting for long line.

Splitting the long line of code across multiple lines enhances readability. This change is in line with good coding practices for handling long lines of code.


Line range hint 1-1371: Overall code readability improvements.

The changes in this file consistently enhance code readability through improved formatting of error messages, function signatures, and long lines of code. While no functional changes were made, these formatting improvements align with good coding practices and make the code easier to read and maintain. Consider applying the suggested further improvements for even better consistency and readability across the file.

src/storage/tests/hashes_test.cc (3)

357-357: Verify the removal of assertion for HIncrbyfloat test

The assertion for the expected value of HIncrbyfloat operation has been commented out. This reduces the strictness of the test case.

Could you please clarify the reasoning behind removing this assertion? If this is intentional, consider adding a comment explaining why the exact value is no longer being checked.

#!/bin/bash
# Check if there are any other changes to HIncrbyfloat tests or implementation
rg -A 5 "HIncrbyfloat"

Line range hint 357-391: Consider the impact of relaxed HIncrbyfloat testing

The removal of specific value assertions in both HIncrbyfloat test cases reduces the precision of these tests. While this might be intentional, it's important to ensure that the HIncrbyfloat functionality is still adequately tested.

Please review if there are alternative ways to test the HIncrbyfloat function that don't rely on exact value matching. For example, could we test for a range of acceptable values or other properties of the output? Additionally, ensure that these changes don't leave any critical behavior untested.

#!/bin/bash
# Check for any other tests related to HIncrbyfloat
rg "HIncrbyfloat" src/storage/tests/

391-391: Confirm the intention of commenting out HIncrbyfloat assertion

Similar to the previous change, another assertion for the HIncrbyfloat operation has been commented out.

Is this part of a larger change in how HIncrbyfloat is being tested? If so, it might be beneficial to update the test description or add comments explaining the new testing strategy.

src/storage/src/base_filter.h (6)

16-19: Inclusion of necessary header files

The added #include statements ensure that the required base key and value formats and debugging utilities are accessible within this file, improving code clarity and functionality.


58-58: Enhanced debug logging for stream meta type

The debug statement now includes the version information for stream meta types, which provides more detailed logging and aids in troubleshooting.


62-64: Corrected debug logging format for list meta type

The debug statement has been reformatted to ensure that the format specifiers align with the corresponding arguments, resulting in accurate and clear logging output.


113-113: Improved constructor parameter alignment in BaseDataFilter

The constructor parameters are now clearly aligned, enhancing readability and maintainability of the code.


156-158: Verify correct parsing and handling of PKHashes meta values

The inclusion of DataType::kPKHashes in the conditional ensures that PKHashes are handled similarly to other data types like Hashes, Sets, and ZSets. Please verify that ParsedBaseMetaValue properly parses PKHashes meta values and that TTL (Etime) and version management function correctly for this new data type.


227-228: Updated BaseDataFilterFactory constructor to include DataType parameter

By adding the enum DataType type parameter to the constructor, the factory can now create data filters specific to the data type, enhancing the modularity and reusability of the filter factory.

src/storage/src/redis.cc (4)

94-95: Confirm Correct DataType for HashesDataFilterFactory


102-111: Confirm Correct DataType for 'pika_hash_data_cf' Compaction Filter Factory


167-168: Update Column Family Indices After Adding New Column Family


493-494: Correct Method Invocation of 'GetMapProperty'

src/storage/src/redis.h (3)

160-161: Parameters in 'BitOp' method are consistent and well-defined

The updated BitOp method signature is consistent and aligns with the expected parameters.


260-300: Ensure unit tests are added for new PK Hash methods

The addition of new PK Hash methods (e.g., PKHExpire, PKHGet, PKHSet, etc.) enhances the Redis class functionality. Please ensure comprehensive unit tests are implemented to validate their correctness and prevent future regressions.

Would you like assistance in generating unit tests for these methods?
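A hedged starting point, modeled on the PKHSet call shown in pika_pkhash.cc and the style of pkhashes_test.cc; the fixture name, the `db` member, PKHGet's output-parameter form, and the meaning of the return code are assumptions:

TEST_F(PKHashesTest, PKHSetGetRoundTrip) {
  int32_t ret = 0;
  storage::Status st = db.PKHSet("PKH_TEST_KEY", "FIELD", "VALUE", &ret);
  ASSERT_TRUE(st.ok());
  ASSERT_EQ(1, ret);  // assumption: 1 indicates a newly created field

  std::string value;
  st = db.PKHGet("PKH_TEST_KEY", "FIELD", &value);
  ASSERT_TRUE(st.ok());
  ASSERT_EQ("VALUE", value);
}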


468-474: Update 'ExpectedStale' function to include 'kPKHashes'

Adding DataType::kPKHashes to the ExpectedStale function ensures that PK Hashes are properly checked for staleness. This update aligns with the new PK Hash functionality.

src/storage/src/redis_pkhashes.cc (8)

77-81: Translate code comments to English for consistency

Several code comments are written in Chinese (e.g., lines 77-81, 87-88, 136, 480-481). To maintain consistency and enhance readability for all contributors, please translate these comments into English.

Also applies to: 87-88, 136-136, 480-481


155-157: Correct the error message regarding TTL value

The condition checks if ttl <= 0, but the error message states "must be >= 0", which includes zero. Since a TTL of zero is invalid, the error message should read "must be > 0" to be accurate.


217-220: Correct the error message regarding expiration timestamp

Similarly, the condition checks if timestamp <= 0, but the error message should state "must be > 0" to accurately reflect the invalid input.


565-568: Eliminate redundant check before inserting into field_set

The explicit search using field_set.find(field) before insertion is unnecessary. You can rely on the result of the insert operation to determine if the field was newly inserted, improving performance.

🧰 Tools
🪛 cppcheck

[performance] 565-565: Searching before insertion is not necessary.

(stlFindInsert)


774-777: Eliminate redundant check before inserting into fields

Similarly, in this loop, you can remove the explicit search before insertion into fields to optimize the code.

🧰 Tools
🪛 cppcheck

[performance] 774-774: Searching before insertion is not necessary.

(stlFindInsert)


857-860: Eliminate redundant check before inserting into fields

This redundancy is also present here. Refactoring to remove the unnecessary check can enhance performance.

🧰 Tools
🪛 cppcheck

[performance] 857-857: Searching before insertion is not necessary.

(stlFindInsert)


680-765: Address unused ttl parameter in PKHIncrby

The ttl parameter in the PKHIncrby function is not used anywhere in the function. If applying a TTL is intended, please modify the function to use this parameter. Otherwise, consider removing it to avoid confusion.


111-113: Consider updating TTL when values are equal

In the PKHSet function, if the existing value is equal to the new value, the function returns early without updating the TTL or other metadata. If updating the TTL is desired even when the value hasn't changed, you should adjust the logic to handle this case.

src/storage/include/storage/storage.h (4)

178-178: Restrict access to the Storage() default constructor

The default constructor Storage() is intended for unit tests only. To prevent unintended use in production code, consider making it private or protected.


416-427: Remove redundant numfields parameter in PKH methods

The numfields parameter in methods like PKHExpire and PKHExpireat is redundant since the number of fields can be obtained from fields.size(). Removing this parameter will simplify the method signatures and prevent potential inconsistencies.

Apply the following changes:

For PKHExpire:

 Status PKHExpire(const Slice& key, int32_t ttl,
-                 int32_t numfields, const std::vector<std::string>& fields,
+                 const std::vector<std::string>& fields,
                  std::vector<int32_t>* rets);

For PKHExpireat:

 Status PKHExpireat(const Slice& key, int64_t timestamp,
-                   int32_t numfields, const std::vector<std::string>& fields,
+                   const std::vector<std::string>& fields,
                    std::vector<int32_t>* rets);

And similarly for other related methods.


1008-1008: Use a pointer for the output parameter in XInfo

To maintain consistency in the codebase, the output parameter result in XInfo should be passed as a pointer.

Apply this diff:

-Status XInfo(const Slice& key, StreamInfoResult& result);
+Status XInfo(const Slice& key, StreamInfoResult* result);

And adjust the implementation accordingly.


1153-1154: Correct the spelling of EnableDymayticOptions to EnableDynamicOptions

The method name EnableDymayticOptions contains a typographical error. Please correct it to EnableDynamicOptions to improve code readability and prevent confusion.

Apply this diff:

-Status EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
                              const std::unordered_map<std::string, std::string>& options);

Ensure all references to this method are updated accordingly.

src/storage/tests/pkhashes_test.cc (5)

27-33: Check the return status of db.Open in SetUp()

This issue was previously noted and still applies: the return status of db.Open(storage_options, path); is assigned to s, but there is no check to verify if the operation was successful. Ignoring the status may lead to unexpected behavior if the database fails to open.
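A minimal fix, sketched against the SetUp() described here (the StorageOptions setup is trimmed and partly assumed):

void SetUp() override {
  storage::StorageOptions storage_options;
  storage_options.options.create_if_missing = true;
  storage::Status s = db.Open(storage_options, path);
  // Fail the fixture immediately if the database cannot be opened,
  // instead of running every test against an unusable handle.
  ASSERT_TRUE(s.ok()) << "db.Open failed: " << s.ToString();
}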


95-103: Unused variable type_status and incorrect error checking in make_expired

This issue was previously noted and still applies: the variable type_status is declared but not populated. The condition !type_status[storage::DataType::kHashes].ok() is invalid because type_status is empty, leading to improper error handling in the make_expired function.


826-828: Incorrect error checking using uninitialized type_status

This issue was previously noted and still applies: the variable type_status is declared but not populated. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); will not function correctly since type_status is empty, leading to invalid error checking.


521-524: Incorrect error checking using uninitialized type_status in PKHMSet test

This issue was previously noted and still applies: in the PKHMSet test, the variable type_status is declared but not used effectively. The condition ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid because type_status is empty and uninitialized.


386-405: PKHGetTest function is well-implemented

The PKHGetTest function correctly tests the PKHGet operation, including scenarios where the key or field does not exist. The assertions appropriately verify the expected outcomes.

src/storage/src/redis_strings.cc (1)

1499-1501: Verify the use of fmt::format in logging

The LOG(INFO) statement uses fmt::format, which may not be compatible with the logging macro if it doesn't support string formatting.

Please ensure that this usage is correct. If the logging macro doesn't support formatted strings directly, consider formatting the string before logging:

auto formatted_message = fmt::format("[key : {:<30}] [value : {:<30}] [timestamp : {:<10}] [version : {}] [survival_time : {}]",
                                     parsed_strings_key.Key().ToString(), parsed_strings_value.UserValue().ToString(),
                                     parsed_strings_value.Etime(), parsed_strings_value.Version(), survival_time);
LOG(INFO) << formatted_message;
src/storage/src/storage.cc (3)

304-308: Good Addition: Validate Arguments in BitOp Function

The added check correctly ensures that when performing the BITOP NOT operation, there is exactly one source key provided. This prevents invalid usage and potential errors.
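For reference, the general shape of such a guard; the enum and parameter names below are assumptions, not the exact code in storage.cc:

// BITOP NOT is unary: reject anything other than exactly one source key.
if (op == BitOpType::kBitOpNot && src_keys.size() != 1) {
  return Status::InvalidArgument("BITOP NOT requires exactly one source key");
}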


1398-1398: Verify the Logic of the Condition in Scan Function

The condition within the if statement may not correctly determine when to stop scanning. Specifically, the comparison miter.Key().compare(prefix) <= 0 could lead to unintended results when prefix is empty or when keys precede the prefix lexicographically. Please verify that the logic accurately reflects the intended behavior.


2036-2042: Reminder: Include New Column Family Handles for PKHash

The addition of pk_hash_cfhds and their inclusion into the cfhds vector is correct. This ensures that EnableAutoCompaction considers the new PKHash column families.

Comment on lines +124 to +132
bool zero_ahead = false;
for (int i = 0; i < length; i++) {
if (ptr[i] == kNeedTransformCharacter && zero_ahead) {
return ptr + i + 1;
}
//TODO: handle invalid format
return ptr;
zero_ahead = ptr[i] == kNeedTransformCharacter;
}
// TODO: handle invalid format
return ptr;

💡 Codebase verification

TODO Comment Lacks an Associated Issue

  • The TODO comment // TODO: handle invalid format in src/storage/include/storage/storage_define.h is not linked to any existing issue.
🔗 Analysis chain

Formatting looks good. Consider addressing the TODO comment.

The formatting changes improve readability and consistency with the rest of the codebase.

Regarding the TODO comment about handling invalid format:

  1. Can you provide more context on what constitutes an "invalid format" in this case?
  2. Is there a ticket or issue tracking this TODO item?

To help investigate this further, we can run the following script:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for related TODOs or issues
echo "Searching for related TODOs:"
rg "TODO.*invalid format" --type cpp
echo "Searching for related issues:"
gh issue list --search "in:title invalid format"

Length of output: 303

constexpr int DataTypeNum = int(DataType::kNones);

constexpr char DataTypeTag[] = { 'k', 'h', 's', 'l', 'z', 'x', 'n', 'a'};
constexpr char* DataTypeStrings[] = { "string", "hash", "set", "list", "zset", "streams", "none", "all"};
constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'n', 'a'};

⚠️ Potential issue

Update DataTypeTag array to include new enum values.

The update to DataTypeNum is correct. However, the DataTypeTag array should be extended to include tags for the new enum values kPKHashes and kAll.

Please update the DataTypeTag array as follows:

constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'p', 'n', 'a'};

Where 'p' represents kPKHashes and 'a' represents kAll.

Comment on lines 62 to 73
if (value_->size() >= kPKHashDataValueSuffixLength) {
user_value_ = rocksdb::Slice(value_->data(), value_->size() - kPKHashDataValueSuffixLength);
memcpy(reserve_, value_->data() + user_value_.size(), kSuffixReserveLength);
ctime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength);
etime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
}

🛠️ Refactor suggestion

Refactor duplicate code in constructors to improve maintainability

The constructors of ParsedPKHashDataValue contain duplicate code for parsing the value. Consider extracting the common parsing logic into a private method to enhance readability and maintainability.

Here's how you could refactor the code:

Add a private method to handle parsing:

private:
  void ParseValue(const char* data, size_t size) {
    if (size >= kPKHashDataValueSuffixLength) {
      user_value_ = rocksdb::Slice(data, size - kPKHashDataValueSuffixLength);
      memcpy(reserve_, data + user_value_.size(), kSuffixReserveLength);
      ctime_ = DecodeFixed64(data + user_value_.size() + kSuffixReserveLength);
      etime_ = DecodeFixed64(data + user_value_.size() + kSuffixReserveLength + kTimestampLength);
    }
  }

Modify the constructors to use the new method:

 explicit ParsedPKHashDataValue(std::string* value) : ParsedInternalValue(value) {
-  if (value_->size() >= kPKHashDataValueSuffixLength) {
-    user_value_ = rocksdb::Slice(value_->data(), value_->size() - kPKHashDataValueSuffixLength);
-    memcpy(reserve_, value_->data() + user_value_.size(), kSuffixReserveLength);
-    ctime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength);
-    etime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
-  }
+  ParseValue(value_->data(), value_->size());
 }

 explicit ParsedPKHashDataValue(const rocksdb::Slice& value) : ParsedInternalValue(value) {
-  if (value.size() >= kPKHashDataValueSuffixLength) {
-    user_value_ = rocksdb::Slice(value.data(), value.size() - kPKHashDataValueSuffixLength);
-    memcpy(reserve_, value.data() + user_value_.size(), kSuffixReserveLength);
-    ctime_ = DecodeFixed64(value.data() + user_value_.size() + kSuffixReserveLength);
-    etime_ = DecodeFixed64(value.data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
-  }
+  ParseValue(value.data(), value.size());
 }

Also applies to: 75-80

Comment on lines +121 to +132
const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};

⚠️ Potential issue

Declare the class constant as static

Similarly, the member variable kPKHashDataValueSuffixLength is a class-level constant and should be declared static const (or static constexpr). As a non-static const member it is stored in every instance and implicitly deletes the copy-assignment operator; making it static makes clear that the value is shared among all instances of the class.

Apply this diff to fix the issue:

 private:
-  const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
+  static const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;

Comment on lines +53 to +58
const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};

⚠️ Potential issue

Declare the class constant as static

The member variable kDefaultValueSuffixLength should be declared static const (or static constexpr) because it is a class-level constant. Keeping it as a non-static const member stores a copy in every instance and implicitly deletes the copy-assignment operator; making it static reflects that the value is shared across all instances of the class.

Apply this diff to fix the issue:

 private:
-  const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
+  static const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;

Comment on lines 1576 to 1582
<<<<<<< HEAD
return HashesExpire(key, ttl_millsec, std::move(meta_value));
=======
return HashesExpire(key, ttl, std::move(meta_value));
case DataType::kPKHashes:
return PKHashesExpire(key, ttl, std::move(meta_value));
>>>>>>> 9d208a64 (feat: add pkhash tests and hash basic cmd)

⚠️ Potential issue

Resolve merge conflicts in the Expire method

There is a merge conflict in the Expire method related to handling hash expirations, including PKHashes.

Please resolve the conflict:

<<<<<<< HEAD
            return HashesExpire(key, ttl_millsec, std::move(meta_value));
=======
            return HashesExpire(key, ttl, std::move(meta_value));
          case DataType::kPKHashes:
            return PKHashesExpire(key, ttl, std::move(meta_value));
>>>>>>> 9d208a64 (feat: add pkhash tests and hash basic cmd)

Integrate the PKHashesExpire case if it's part of the new functionality.

Committable suggestion was skipped due to low confidence.

@@ -1344,8 +1452,7 @@ Status Storage::PKScanRange(const DataType& data_type, const Slice& key_start, c
miter.Seek(temp);
}

while (miter.Valid() && limit > 0 &&
(end_no_limit || miter.Key().compare(key_end.ToString()) <= 0)) {
while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key().compare(key_end.ToString()) <= 0)) {

⚠️ Potential issue

Potential Off-by-One Error in Loop Condition

In the PKScanRange function, the loop condition checks if miter.Key().compare(key_end.ToString()) <= 0. Ensure that this comparison correctly includes or excludes the key_end boundary as intended. Off-by-one errors could result in missing or extra keys in the scan results.

Please verify and adjust the condition if necessary.
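For illustration, a minimal sketch (a hypothetical helper, not the project's code) that makes the end-boundary intent explicit; an inclusive scan keeps key_end, an exclusive one stops just before it:

#include <string>

// Hypothetical helper: decide whether a forward range scan should visit
// `current_key`, given an optional end boundary and the intended semantics.
bool WithinForwardRange(const std::string& current_key, const std::string& key_end,
                        bool end_no_limit, bool include_end) {
  if (end_no_limit) {
    return true;  // no upper bound was supplied
  }
  int cmp = current_key.compare(key_end);
  return include_end ? cmp <= 0   // `<= 0` keeps key_end itself
                     : cmp < 0;   // `< 0` stops just before key_end
}

Encoding the intended semantics in one place makes the `<=` versus `<` choice easier to audit than repeating the comparison inline at each call site.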

@@ -1391,8 +1498,7 @@
miter.SeekForPrev(base_key_start.Encode().ToString());
}

while (miter.Valid() && limit > 0 &&
(end_no_limit || miter.Key().compare(key_end.ToString()) >= 0)) {
while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key().compare(key_end.ToString()) >= 0)) {

⚠️ Potential issue

Potential Off-by-One Error in Reverse Scan Loop Condition

In the PKRScanRange function, the loop condition uses miter.Key().compare(key_end.ToString()) >= 0. Verify that this condition correctly handles the inclusion or exclusion of the key_end boundary in reverse order. Off-by-one errors here could affect the accuracy of the reverse scan results.

Please review and adjust the condition if required.

Comment on lines +821 to 823
std::copy(std::move_iterator<Iter>(vec.begin()), std::move_iterator<Iter>(vec.end()),
std::insert_iterator<Uset>(member_set, member_set.begin()));
}

⚠️ Potential issue

Potential Issue with Moving Elements into unordered_set

Using std::move_iterator with std::copy to insert elements into an unordered_set may lead to undefined behavior. Moved-from elements may not remain valid for hashing and equality checks required by the set. Consider copying the elements instead of moving them.

Apply this diff to fix the issue:

-      std::copy(std::move_iterator<Iter>(vec.begin()), std::move_iterator<Iter>(vec.end()),
+      std::copy(vec.begin(), vec.end(),
                std::insert_iterator<Uset>(member_set, member_set.begin()));

Comment on lines +499 to +501
std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpiretime, std::move(ehexpiretimeptr)));

⚠️ Potential issue

Correct command flags for PKHExpiretimeCmd

The PKHExpiretime command retrieves the expiration time of a hash key, which is a read operation. Currently, it is flagged as a write command. Please change the flags to use kCmdFlagsRead to accurately reflect its functionality.

Apply this diff:

-  std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
-      kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+  std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
+      kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 21

🧹 Outside diff range and nitpick comments (28)
include/pika_pkhash.h (10)

37-37: Consider implementing the Clear method

The Clear method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.


31-32: Unused member variables

The numfields_ and fields_ member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.


63-63: Consider implementing the Clear method

The Clear method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.


57-58: Unused member variables

The numfields_ and fields_ member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.


88-88: Consider implementing the Clear method

The Clear method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.


81-83: Unused member variables

The ttl_, numfields_, and fields_ member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.


114-114: Consider implementing the Clear method

The Clear method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.


107-109: Unused member variables

The ttl_, numfields_, and fields_ member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.


140-140: Consider implementing the Clear method

The Clear method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.


133-135: Unused member variables

The ttl_, numfields_, and fields_ member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.

src/storage/src/redis.cc (1)

296-495: Improved RocksDB statistics gathering

The GetRocksDBInfo function has been significantly enhanced to provide more comprehensive RocksDB statistics. This improvement will offer better insights into the database's performance and state.

While the changes are valuable, the function has become quite long. Consider refactoring it into smaller, more focused functions for better maintainability. For example:

  1. Create separate functions for different categories of statistics (e.g., WriteMemtableStats, WriteCompactionStats, etc.).
  2. Use a map of property names to metric names to reduce repetitive code in the write_property and write_ticker_count loops.

Example refactoring:

void Redis::GetRocksDBInfo(std::string& info, const char* prefix) {
  std::ostringstream string_stream;
  string_stream << "#" << prefix << "RocksDB" << "\r\n";

  WriteMemtableStats(string_stream, prefix);
  WriteCompactionStats(string_stream, prefix);
  WriteKeyStats(string_stream, prefix);
  // ... other categories

  WriteCFStats(string_stream, prefix);

  info.append(string_stream.str());
}

void Redis::WriteMemtableStats(std::ostringstream& stream, const char* prefix) {
  const std::vector<std::pair<rocksdb::Slice, const char*>> memtable_properties = {
    {rocksdb::DB::Properties::kNumImmutableMemTable, "num_immutable_mem_table"},
    {rocksdb::DB::Properties::kNumImmutableMemTableFlushed, "num_immutable_mem_table_flushed"},
    // ... other memtable properties
  };

  for (const auto& [property, metric] : memtable_properties) {
    WriteAggregatedIntProperty(stream, property, metric, prefix);
  }
}

// ... other category functions

void Redis::WriteAggregatedIntProperty(std::ostringstream& stream, const rocksdb::Slice& property, const char* metric, const char* prefix) {
  uint64_t value = 0;
  db_->GetAggregatedIntProperty(property, &value);
  stream << prefix << metric << ':' << value << "\r\n";
}

This refactoring would make the code more modular and easier to maintain.

src/storage/src/redis.h (2)

260-300: LGTM: New PK Hash methods added

The new PK Hash methods (PKHExpire, PKHGet, PKHSet, PKHMSet, etc.) provide a comprehensive set of operations for PK Hashes. The naming conventions and parameter choices are consistent with existing Redis-style methods, and the inclusion of TTL parameters for some methods is a useful feature.

Consider adding brief documentation comments for each new method to explain their purpose and any unique behaviors, especially for methods that differ from standard Redis hash operations.
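For example, a possible header-comment style (a sketch only; the parameter meanings are inferred from the signatures quoted elsewhere in this review and are not confirmed by the implementation):

// PKHExpire: apply a per-field TTL (in seconds) to `fields` of the hash at `key`.
// `rets` receives one status code per field (e.g. whether the field existed and
// the TTL was applied). Unlike standard Redis hashes, expiration here is tracked
// per field rather than only on the whole key.
Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields,
                 const std::vector<std::string>& fields, std::vector<int32_t>* rets);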


537-538: Remove commented-out code and create a task for env separation

The env_ member variable has been commented out with a TODO comment. Instead of keeping commented-out code in the header file, it's better to remove it entirely and create a separate task or issue to track the planned separation of env for each RocksDB instance.

Consider removing these lines and creating a GitHub issue to track the planned work:

- // TODO(wangshaoyi): seperate env for each rocksdb instance
-  //  rocksdb::Env* env_ = nullptr;
src/storage/src/redis_strings.cc (1)

8-8: Remove unnecessary include

The <iostream> header is included but not used in this file. Removing it can slightly improve compilation time and reduce potential naming conflicts.

Apply this diff to remove the unnecessary include:

-#include <iostream>
src/storage/src/pkhash_data_value_format.h (1)

47-47: Translate or address the TODO comment.

The TODO comment at line 47 is in Chinese. For consistency and to assist all team members, please translate it to English and address the pending task.

Would you like assistance in addressing this TODO or opening a GitHub issue to track it?

src/storage/src/redis_pkhashes.cc (5)

477-477: Translate code comments to English for consistency

The comment on line 477 is written in Chinese:

// 1. 判断类型是否匹配和key是否过期。

For consistency and readability, please translate it to English.

Apply this diff to translate the comment:

-// 1. 判断类型是否匹配和key是否过期。
+// 1. Check if the type matches and whether the key has expired.

694-694: Remove unused variable meta_value_buf

The variable meta_value_buf is declared but not used in this scope. Removing it will clean up the code.

Apply this diff to remove the unused variable:

 char value_buf[32] = {0};
-char meta_value_buf[4] = {0};

236-236: Unused variable is_stale

The variable is_stale is declared but not used in the PKHExpireat function. Consider removing it to clean up the code.

Apply this diff to remove the unused variable:

-  bool is_stale = false;

348-348: Unused variable is_stale

Similarly, the variable is_stale in the PKHTTL function is declared but not used.

Apply this diff:

-  bool is_stale = false;

762-762: Check the status after database write operation

After calling db_->Write, it's good practice to check the returned status to handle any potential write errors.

Add error handling after the write operation:

 s = db_->Write(default_write_options_, &batch);
+if (!s.ok()) {
+  return s;
+}
 UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
 return s;
src/storage/include/storage/storage.h (2)

124-127: Use int64_t for ttl to support larger time values

Consider changing ttl from int32_t to int64_t to support larger TTL values and maintain consistency with other time-related variables.

Apply this diff:

 struct FieldValueTTL {
   std::string field;
   std::string value;
-  int32_t ttl;
+  int64_t ttl;
   bool operator==(const FieldValueTTL& fv) const { return (fv.field == field && fv.value == value && fv.ttl == ttl); }
 };

442-442: Use int64_t for ttl default parameter in PKHIncrby

Changing the ttl parameter from int32_t to int64_t allows for larger TTL values and enhances consistency.

Apply this diff:

-Status PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl = 0);
+Status PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int64_t ttl = 0);
src/storage/tests/pkhashes_test.cc (6)

139-139: Typo in TODO comment

There's a typo in the TODO comment on line 139: "fisrt" should be "first".

Apply this change to correct the typo:

-  // TODO (DDD: cmd basic test cases fisrt)
+  // TODO (DDD: cmd basic test cases first)

576-576: Inconsistent language in comment

The comment on line 576 is in Chinese, while the rest of the codebase is in English. For consistency, please translate it to English.

Apply this change:

-  // 从这里开始有问题。
+  // There is an issue starting from here.

159-160: Check the return status of GetCurrentTime

The function rocksdb::Env::GetCurrentTime may return a status that should be checked to ensure it succeeded.

If GetCurrentTime returns a status, capture and assert it:

-  rocksdb::Env::Default()->GetCurrentTime(&unix_time);
+  rocksdb::Status s_time = rocksdb::Env::Default()->GetCurrentTime(&unix_time);
+  ASSERT_TRUE(s_time.ok());

If it doesn't return a status, this can be ignored.


624-625: Add assertion after Setex call

In the PKHSetTest, when calling db.Setex, you assign the status to s but do not check its value immediately.

Consider adding an assertion to verify the operation was successful:

  s = db.Setex("GP4_HSET_KEY", "STRING_VALUE_WITH_TTL", 1);
+ ASSERT_TRUE(s.ok());

345-347: Add assertion to verify hash length

In the PKHSetexTest, after setting a field with expiration, you retrieve the length of the hash but do not assert its value.

Consider adding an assertion to confirm that the hash length is as expected:

  s = db.PKHLen("GP1_HSET_KEY", &ret);
  ASSERT_TRUE(s.ok());
+ ASSERT_EQ(ret, 1);

280-281: Remove unused variable rets

In the PKHPersistTest, the variable rets is declared but not used after line 280.

If rets is unnecessary, consider removing it to clean up the code:

-  std::vector<int32_t> rets;
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 25c41e8 and 801f122.

📒 Files selected for processing (19)
  • include/acl.h (1 hunks)
  • include/pika_command.h (5 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_client_conn.cc (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (13 hunks)
  • src/storage/include/storage/storage_define.h (2 hunks)
  • src/storage/src/base_filter.h (6 hunks)
  • src/storage/src/base_value_format.h (5 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.cc (12 hunks)
  • src/storage/src/redis.h (14 hunks)
  • src/storage/src/redis_hashes.cc (29 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/redis_strings.cc (39 hunks)
  • src/storage/src/storage.cc (28 hunks)
  • src/storage/tests/hashes_test.cc (3 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • include/acl.h
  • src/pika_client_conn.cc
  • src/storage/include/storage/storage_define.h
  • src/storage/src/base_filter.h
  • src/storage/src/redis_hashes.cc
  • src/storage/tests/hashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 565-565: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 774-774: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 857-857: Searching before insertion is not necessary.

(stlFindInsert)

src/storage/src/redis_strings.cc

[error] 917-917: Unmatched '{'. Configuration

(syntaxError)

🔇 Additional comments (55)
src/storage/src/base_value_format.h (4)

57-57: Approved: Virtual destructor for InternalValue class.

The change to make the destructor virtual is a good practice. It ensures proper cleanup of derived classes when deleting through a base class pointer.


87-87: Approved: Formatting improvements in InternalValue class.

The formatting changes to the protected access specifier improve code readability and maintain consistency with the overall coding style.


Line range hint 99-164: Approved: Formatting improvements in ParsedInternalValue class.

The formatting changes to the public access specifier and the IsValid() method improve code readability and maintain consistency with the overall coding style.


21-35: ⚠️ Potential issue

Update DataTypeTag array to include new enum values.

The DataType enum and DataTypeStrings array have been correctly updated to include the new kPKHashes and kAll values. However, the DataTypeTag array still needs to be updated to include tags for these new enum values.

Please update the DataTypeTag array as follows:

constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'p', 'n', 'a'};

Where 'p' represents kPKHashes and 'a' represents kAll.

include/pika_pkhash.h (2)

1-469: Overall review summary

The file introduces a comprehensive set of command classes for PKHash operations. The structure is consistent across most classes, which is good for maintainability. However, there are a few minor issues to address:

  1. Several classes have empty Clear methods. Consider removing them if not needed or adding comments to explain their purpose.
  2. Some classes have uninitialized member variables (is_force_ in PKHLenCmd, is_wt_ in PKHGetAllCmd). These should be initialized to prevent undefined behavior.
  3. The PKHScanCmd class's Clear method doesn't initialize all member variables.
  4. Some classes have unused member variables. Consider removing them if they're not needed in the implementation.

Addressing these issues will improve the overall code quality and reduce the potential for bugs.


459-463: ⚠️ Potential issue

Initialize the cursor_ member variable in the Clear() method

The Clear() method initializes pattern_, count_, and is_wt_, but not cursor_. This may lead to unexpected behavior if cursor_ is used after calling Clear(). Consider initializing cursor_ in the Clear() method as well.

Apply this diff to initialize cursor_:

 virtual void Clear() {
   pattern_ = "*";
   count_ = 10;
   is_wt_ = false;
+  cursor_ = 0;
 }

Likely invalid or redundant comment.

include/pika_command.h (2)

140-158: LGTM! New PKHash command constants added.

The addition of these new PKHash command constants is consistent with the existing structure and naming conventions. This enhancement expands the command set for PKHash operations in the Pika database system.


313-313: LGTM! New kCmdFlagsPKHash flag added.

The addition of the kCmdFlagsPKHash flag to the CmdFlags enum is appropriate and follows the existing pattern of command type flags.

To ensure proper integration of the new PKHash commands and flag, please run the following verification script:

✅ Verification successful

Verification Successful: kCmdFlagsPKHash flag is appropriately integrated.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of new PKHash commands and flag

# Check for PKHash command implementations
echo "Searching for PKHash command implementations:"
rg "class.*Cmd.*public.*Cmd" --type cpp | grep -E "PKH(Set|Expire|Expireat|Expiretime|TTL|Persist|Get|Exists|Del|Len|Strlen|Incrby|MSet|MGet|Keys|Vals|Getall|Scan)Cmd"

# Check for usage of kCmdFlagsPKHash flag
echo "Searching for usage of kCmdFlagsPKHash flag:"
rg "kCmdFlagsPKHash" --type cpp

# Check for registration of PKHash commands
echo "Searching for registration of PKHash commands:"
rg "cmd_table->insert" --type cpp | grep -E "PKH(Set|Expire|Expireat|Expiretime|TTL|Persist|Get|Exists|Del|Len|Strlen|Incrby|MSet|MGet|Keys|Vals|Getall|Scan)"

Length of output: 6467

src/storage/src/redis.cc (4)

218-218: Compaction for new PKHash column family added

The addition of compaction for the new PKHash column family is consistent with the implementation of the new functionality.


493-493: Corrected method call to GetMapProperty

The method call to GetMapProperty has been corrected by removing the unnecessary rocksdb::DB:: qualification. This change improves code correctness.


42-42: Verify the impact of commenting out env_ initialization

The initialization of env_ has been commented out. Please ensure that this change doesn't negatively impact the functionality of the Redis class, especially in areas where env_ might have been used.

To check for potential issues, run the following script:

✅ Verification successful

env_ Initialization Commented Out Successfully

Commenting out the initialization of env_ has no impact on the current functionality, as there are no active uses of env_ in the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for uses of env_ in the codebase
rg '\benv_\b' --type cpp

Length of output: 192


102-110: New column family for "pika hash" added

The addition of the new column family for "pika hash" aligns with the PR objectives. This enhancement will support the new PKHash operations.

To ensure proper integration, please verify that:

  1. The new column family is correctly used in relevant PKHash operations.
  2. The compaction and filtering logic for this new column family is consistent with other similar data types.

Run the following script to check for usage of the new column family:

✅ Verification successful

Integration of "pika hash" column family verified

The new column family is properly integrated and utilized within src/storage/src/redis.cc. No further issues found.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for uses of pika_hash_data_cf in the codebase
rg 'pika_hash_data_cf' --type cpp

Length of output: 676

src/storage/src/redis.h (7)

17-19: LGTM: New include statements added

The new include statements for env.h, pika_codis_slot.h, custom_comparator.h, and redis_streams.h are appropriate additions, likely supporting the new PK Hash functionality and other changes in this file.

Also applies to: 24-24


416-417: LGTM: Improved CreateIterator with range support

The updates to the CreateIterator method, adding lower_bound and upper_bound parameters, enhance the functionality by allowing range-based iteration. This is a valuable improvement that provides more control over the iteration process.

Also applies to: 421-422
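As a rough illustration (not the project's actual CreateIterator), range bounds are typically wired into RocksDB read options so the iterator itself enforces them; note the bound slices must outlive the iterator:

#include <memory>
#include <rocksdb/db.h>

// Sketch: build an iterator limited to [*lower_bound, *upper_bound).
std::unique_ptr<rocksdb::Iterator> NewBoundedIterator(rocksdb::DB* db,
                                                      rocksdb::ColumnFamilyHandle* cf,
                                                      const rocksdb::Slice* lower_bound,
                                                      const rocksdb::Slice* upper_bound) {
  rocksdb::ReadOptions read_options;
  read_options.iterate_lower_bound = lower_bound;  // keys before this are skipped
  read_options.iterate_upper_bound = upper_bound;  // iteration stops before this key
  return std::unique_ptr<rocksdb::Iterator>(db->NewIterator(read_options, cf));
}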


473-474: LGTM: ExpectedStale method updated for PKHashes

The ExpectedStale method has been correctly updated to include support for PKHashes, consistent with the new PK Hash functionality added throughout the file. The implementation for PKHashes matches that of regular Hashes, which is appropriate.


549-552: LGTM: Improved scan methods with type support

The updates to GetScanStartPoint and StoreScanNextPoint methods, adding a DataType& type parameter, enhance the functionality by allowing type-specific scan operations. This change is consistent with the overall improvements in the file for supporting different data types and provides more flexibility in scan operations.


Line range hint 1-564: Summary of changes and action items

This update to redis.h introduces significant new functionality, primarily focused on PK Hash operations, and includes various improvements to existing methods. The changes are generally well-implemented and consistent. However, there are a few items that need attention:

  1. Resolve the merge conflict in the Expire methods (lines 116-130).
  2. Address potential thread-safety issues in the KeyStatistics struct (lines 77-78).
  3. Fix the use of rvalue reference parameters with default values in GetStreamMeta and XRange methods (lines 384-385, 395-396).
  4. Remove commented-out code for the env_ member variable (lines 537-538).
  5. Consider adding brief documentation for the new PK Hash methods.

Once these items are addressed, the changes will significantly enhance the capabilities of the Redis class while maintaining code quality and consistency.


160-161: LGTM: Updated method signatures for improved functionality

The changes to BitOp, SDiffstore, SInterstore, and SUnionstore methods, adding the value_to_dest parameter and adjusting return types, appear to be part of a broader refactoring. These changes look good and consistent across methods.

To ensure these changes don't break existing code, please run:

#!/bin/bash
# Search for usages of the modified methods
rg "\b(BitOp|SDiffstore|SInterstore|SUnionstore)\b" --type cpp

Also applies to: 306-307, 309-310, 319-320


Line range hint 60-78: LGTM: KeyStatistics struct updated, but consider thread safety

The changes to the KeyStatistics struct, including the improved AvgDuration method and new AddModifyCount and ModifyCount methods, look good. However, if these methods might be accessed concurrently, consider using atomic operations or mutex locks to ensure thread safety.
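A minimal sketch of the atomic-counter approach (hypothetical member names; this is not the project's KeyStatistics definition):

#include <atomic>
#include <cstdint>

struct KeyStatisticsSketch {
  std::atomic<uint64_t> modify_count{0};
  std::atomic<uint64_t> total_duration_us{0};
  std::atomic<uint64_t> duration_samples{0};

  void AddModifyCount(uint64_t n) { modify_count.fetch_add(n, std::memory_order_relaxed); }
  uint64_t ModifyCount() const { return modify_count.load(std::memory_order_relaxed); }

  void AddDuration(uint64_t us) {
    total_duration_us.fetch_add(us, std::memory_order_relaxed);
    duration_samples.fetch_add(1, std::memory_order_relaxed);
  }
  uint64_t AvgDuration() const {
    uint64_t n = duration_samples.load(std::memory_order_relaxed);
    return n == 0 ? 0 : total_duration_us.load(std::memory_order_relaxed) / n;
  }
};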

To verify potential concurrent access, please run:

src/storage/src/redis_strings.cc (9)

139-141: Use a formatting library for error messages

As suggested in previous comments, consider using a formatting library like fmt or std::format (C++20) to improve the performance and maintainability of error messages throughout the codebase.

Example using fmt:

return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
                               key.ToString(),
                               DataTypeStrings[static_cast<int>(DataType::kStrings)],
                               DataTypeStrings[static_cast<int>(GetMetaValueType(value))]));

82-84: Improve error message construction

As suggested in previous comments, consider creating a helper function to format error messages. This will improve maintainability and reduce redundancy throughout the codebase.

Example helper function:

Status GenerateWrongTypeError(const Slice& key, DataType expected_type, DataType actual_type) {
  return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
                                 key.ToString(),
                                 DataTypeStrings[static_cast<int>(expected_type)],
                                 DataTypeStrings[static_cast<int>(actual_type)]));
}

Use this helper function in your methods to simplify error handling and improve consistency.


915-920: ⚠️ Potential issue

Resolve merge conflict in Setvx function signature

There's a merge conflict in the Setvx function signature. The parameter name has changed from ttl_millsec to ttl, which might affect other parts of the code that use this function.

Resolve the merge conflict by choosing the appropriate implementation:

Status Redis::Setvx(const Slice& key, const Slice& value, const Slice& new_value, int32_t* ret, int64_t ttl) {

After resolving the conflict, run the following script to verify the usage of the Setvx function throughout the codebase:

#!/bin/bash
# Search for all occurrences of the Setvx function
rg "Setvx\s*\(" --type cpp

Ensure that all calls to Setvx are updated with the correct parameter name.

🧰 Tools
🪛 cppcheck

[error] 917-917: Unmatched '{'. Configuration

(syntaxError)


1692-1694: 🛠️ Refactor suggestion

Verify implementation of PKPatternMatchDelWithRemoveKeys

The PKPatternMatchDelWithRemoveKeys function has been modified to include additional parameters for pattern matching and limiting the number of keys to delete.

Please review the implementation to ensure it correctly handles:

  1. Pattern matching for keys
  2. Respecting the max_count limit
  3. Properly populating the remove_keys vector

Run the following script to check for similar pattern matching implementations in the codebase:

#!/bin/bash
# Search for pattern matching implementations
rg "StringMatch|PatternMatch" --type cpp

Consider extracting the pattern matching logic into a separate helper function to improve code reusability and maintainability. For example:

bool MatchesPattern(const std::string& key, const std::string& pattern) {
  return StringMatch(pattern.data(), pattern.size(), key.data(), key.size(), 0) != 0;
}

This helper function can then be used in the PKPatternMatchDelWithRemoveKeys function and other similar implementations.


1283-1289: ⚠️ Potential issue

Resolve merge conflict in PKSetexAt function signature and address TODO

There's a merge conflict in the PKSetexAt function signature. The parameter name has changed from time_stamp_millsec_ to timestamp, which might affect other parts of the code that use this function.

  1. Resolve the merge conflict by choosing the appropriate implementation:
// TODO(wangshaoyi): timestamp uint64_t
Status Redis::PKSetexAt(const Slice& key, const Slice& value, int64_t timestamp) {
  1. Address the TODO comment by changing the timestamp parameter type to uint64_t:
Status Redis::PKSetexAt(const Slice& key, const Slice& value, uint64_t timestamp) {

After resolving the conflict and addressing the TODO, run the following script to verify the usage of the PKSetexAt function throughout the codebase:

#!/bin/bash
# Search for all occurrences of the PKSetexAt function
rg "PKSetexAt\s*\(" --type cpp

Ensure that all calls to PKSetexAt are updated with the correct parameter name and type.

Would you like me to create a GitHub issue to track the task of changing the timestamp parameter type to uint64_t throughout the codebase?


1576-1582: ⚠️ Potential issue

Resolve merge conflict in Expire function and verify PKHashes support

There's a merge conflict in the Expire function, and new cases for DataType::kPKHashes have been added.

Resolve the merge conflict by integrating the new PKHashes case:

case DataType::kHashes:
  return HashesExpire(key, ttl, std::move(meta_value));
case DataType::kPKHashes:
  return PKHashesExpire(key, ttl, std::move(meta_value));
case DataType::kLists:
  return ListsExpire(key, ttl_millsec, std::move(meta_value));

After resolving the conflict, run the following script to verify the implementation of PKHashes support throughout the codebase:

#!/bin/bash
# Search for all occurrences of PKHashes-related functions
rg "PKHashes" --type cpp

Ensure that PKHashes support is consistently implemented across all relevant functions (e.g., Expire, Expireat, Del, TTL, etc.).


Line range hint 1-1694: Final verification of changes and consistency

The file has undergone several changes, including:

  1. Modification of function signatures
  2. Addition of PKHashes support
  3. Changes in parameter names

To ensure consistency throughout the file and the entire codebase, please perform the following steps:

  1. Resolve all merge conflicts as suggested in previous comments.
  2. Run the following script to check for any remaining inconsistencies in parameter naming:
#!/bin/bash
# Search for inconsistent parameter names
rg "ttl_millsec|expired_timestamp_millsec|time_stamp_millsec_" --type cpp
  1. Verify that all functions using or returning time-related values are consistent in their use of milliseconds or seconds.

  2. Ensure that the PKHashes support is implemented consistently across all relevant functions.

  3. Run your test suite to catch any potential issues introduced by these changes.

Consider creating a separate file for PKHashes-related operations if the codebase continues to grow, to maintain better separation of concerns and improve code organization.


65-70: ⚠️ Potential issue

Resolve merge conflict in Append function signature

There's a merge conflict in the Append function signature. The parameter name has changed from expired_timestamp_millsec to expired_timestamp_sec, which might affect other parts of the code that use this function.

Resolve the merge conflict by choosing the appropriate implementation:

Status Redis::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
                     std::string& out_new_value) {

After resolving the conflict, run the following script to verify the usage of the Append function throughout the codebase:

Ensure that all calls to Append are updated with the correct parameter name.

✅ Verification successful

Merge conflict in Append function signature resolved correctly

The parameter name change from expired_timestamp_millsec to expired_timestamp_sec does not affect functionality as parameter names do not need to match in C++ function calls. All usages are consistent by parameter position.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for all occurrences of the Append function
rg "Append\s*\(" --type cpp

Length of output: 4366


528-534: ⚠️ Potential issue

Resolve merge conflict in GetrangeWithValue function signature

There's a merge conflict in the GetrangeWithValue function signature. The parameter name has changed from ttl_millsec to ttl, which might affect other parts of the code that use this function.

Resolve the merge conflict by choosing the appropriate implementation:

Status Redis::GetrangeWithValue(const Slice& key, int64_t start_offset, int64_t end_offset, std::string* ret,
                                std::string* value, int64_t* ttl) {

After resolving the conflict, run the following script to verify the usage of the GetrangeWithValue function throughout the codebase:

Ensure that all calls to GetrangeWithValue are updated with the correct parameter name.

src/storage/src/pkhash_data_value_format.h (1)

31-31: Verify that DataType::kNones is correct.

Please ensure that DataType::kNones is the intended data type. If kNones is correct, no action is needed. Otherwise, consider updating it to the appropriate data type.

src/pika_pkhash.cc (1)

660-680: ⚠️ Potential issue

Fix incorrect variable usage in PKHScanCmd::Do()

In the error handling section, you're checking s_.IsInvalidArgument(), but s_ is not defined in this scope. The variable s holds the status returned by HScan and should be used instead.

Apply this diff to correct the variable:

-  } else if (s_.IsInvalidArgument()) {
+  } else if (s.IsInvalidArgument()) {

Likely invalid or redundant comment.

src/storage/src/redis_pkhashes.cc (2)

217-220: ⚠️ Potential issue

Correct the error message regarding expiration timestamp

The condition checks if timestamp <= 0, but the error message says "must be >= 0", which is inconsistent with the condition. Since a timestamp of zero is invalid, the error message should state "must be > 0" to be accurate.

Apply this diff to correct the error message:

 if (timestamp <= 0) {
   rets->assign(numfields, 2);
-  return Status::InvalidArgument("invalid expire time, must be >= 0");
+  return Status::InvalidArgument("invalid expire time, must be > 0");
 }

Likely invalid or redundant comment.


155-157: ⚠️ Potential issue

Correct the error message regarding TTL value

Similarly, the condition checks if ttl <= 0, but the error message states "must be >= 0", which includes zero. Since a TTL of zero is invalid, the error message should read "must be > 0" to be accurate.

Apply this diff to correct the error message:

 if (ttl <= 0) {
-  return Status::InvalidArgument("invalid expire time, must be >= 0");
+  return Status::InvalidArgument("invalid expire time, must be > 0");
 }

Likely invalid or redundant comment.

src/storage/include/storage/storage.h (10)

28-28: Include directive added for pkhash_data_value_format.h

The inclusion of pkhash_data_value_format.h is appropriate for the new PKHash functionalities.


Line range hint 98-103: Duplicate: Correct the spelling of invaild_keys

The variable invaild_keys is misspelled. This issue has been previously flagged and remains unaddressed.


124-129: Duplicate: Consider initializing members in the constructor for FieldValueTTL

Adding constructors to FieldValueTTL can prevent uninitialized members and improve code clarity. This issue has been previously flagged and remains unaddressed.
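A sketch of what the suggested constructors could look like, based on the FieldValueTTL members quoted earlier in this review:

#include <cstdint>
#include <string>
#include <utility>

struct FieldValueTTL {
  std::string field;
  std::string value;
  int32_t ttl = 0;  // default initializer so a default-constructed object is well-defined

  FieldValueTTL() = default;
  FieldValueTTL(std::string f, std::string v, int32_t t)
      : field(std::move(f)), value(std::move(v)), ttl(t) {}

  bool operator==(const FieldValueTTL& fv) const {
    return fv.field == field && fv.value == value && fv.ttl == ttl;
  }
};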


178-178: Duplicate: Restrict access to the Storage() default constructor

To prevent unintended use in production code, consider making the default constructor private or protected. This issue has been previously flagged and remains unaddressed.
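One possible shape of this restriction (illustrative only; the real Storage class has different constructors and members):

// Sketch: keep the default constructor out of general reach while still
// allowing test fixtures or derived helpers to opt in explicitly.
class StorageSketch {
 public:
  explicit StorageSketch(int some_required_option) { (void)some_required_option; }

 protected:
  StorageSketch() = default;  // reachable only from derived classes
};

class TestStorage : public StorageSketch {
 public:
  TestStorage() : StorageSketch() {}  // a test double can still default-construct
};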


278-279: Duplicate: Ensure consistent use of pointers for output parameters in BitOp method

The output parameter value_to_dest should be a pointer for consistency. This issue has been previously flagged and remains unaddressed.


490-491: Duplicate: Ensure consistent use of pointers for output parameters in SDiffstore method

The output parameter value_to_dest should be a pointer for consistency. This issue has been previously flagged and remains unaddressed.


574-575: Duplicate: Ensure consistent use of pointers for output parameters in SUnionstore method

The output parameter value_to_dest should be a pointer for consistency. This issue has been previously flagged and remains unaddressed.


1008-1008: Duplicate: Use a pointer for the output parameter in XInfo

Passing the output parameter result as a pointer maintains consistency in the codebase. This issue has been previously flagged and remains unaddressed.


1153-1154: Duplicate: Correct the spelling of EnableDymayticOptions to EnableDynamicOptions

The method name contains a typographical error. This issue has been previously flagged and remains unaddressed.


416-427: Duplicate: Remove redundant numfields parameter in PKH methods

The numfields parameter is redundant since the number of fields can be obtained from fields.size(). This issue has been previously flagged and remains unaddressed.

src/storage/tests/pkhashes_test.cc (2)

1053-1058: Handle not found case correctly in PKHVals

In the PKHVals test, when the hash table does not exist, you correctly check that s.IsNotFound() and the values vector is empty.


711-723: Verify handling of expired keys in PKHMGetTest

In the PKHMGetTest, after expiring the key GP4_HMGET_KEY, you test fetching fields and expect them to be not found.

The test logic is correct. Ensure that the expiration mechanism works as intended and that expired keys do not return values.

src/storage/src/storage.cc (8)

490-494: Duplicate comment: Use int64_t for TTL parameter in PKHExpire

The previous review comment regarding changing the ttl parameter from int32_t to int64_t in PKHExpire is still applicable.


530-533: Duplicate comment: Use int64_t for TTL parameter in PKHSetex

The prior suggestion to change the ttl parameter from int32_t to int64_t in PKHSetex remains valid.


555-558: Duplicate comment: Use int64_t for TTL parameter in PKHIncrby

The earlier recommendation to modify the ttl parameter from int32_t to int64_t in PKHIncrby still applies.


1135-1136: Duplicate comment: Specify parameter type in lambda function for C++11 compatibility

The previous comment about replacing auto with an explicit parameter type in the lambda function to ensure C++11 compatibility is still relevant.


1975-1975: Duplicate comment: Compatibility of std::plus<> in std::transform

The concern regarding the use of std::plus<>{} requiring C++14 or newer is still applicable. The previous suggestion to specify the type or use a lambda function remains valid.


1455-1455: Duplicate comment: Verify loop condition in PKScanRange for off-by-one error

The potential off-by-one error in the loop condition of PKScanRange has been previously mentioned and is still pertinent.


1501-1501: Duplicate comment: Verify loop condition in PKRScanRange for off-by-one error

The earlier note about a possible off-by-one error in the PKRScanRange function's loop condition continues to be relevant.


821-823: Duplicate comment: Avoid moving elements into std::unordered_set

The prior issue regarding the use of std::move_iterator with std::copy into an unordered_set is still valid.

src/pika_command.cc (3)

486-488: LGTM: Addition of PKHSet Command

The PKHSetCmd is correctly added with appropriate flags and arity. The implementation aligns with the intended functionality for setting hash values with expiration.


512-514: LGTM: Addition of PKHGet Command

The PKHGetCmd is properly implemented with the correct flags for a read operation. The command should function as expected for retrieving hash values.


516-518: Verify the Parameter Count for PKHSetexCmd

The PKHSetexCmd is added, but ensure that the arity and parameter parsing handle all required arguments correctly, especially with variable arguments.

Run the following script to check the command usage:

✅ Verification successful

Parameter Count for PKHSetexCmd Verified

The PKHSetexCmd is instantiated with an arity of -4, and the parameter parsing in its implementation correctly handles the required arguments, including variable arguments.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the arity and usage of PKHSetex command.

# Test: Search for PKHSetex command definitions.
rg 'PKHSetexCmd' -A 5

Length of output: 2332


private:
std::string key_;
bool is_force_;

⚠️ Potential issue

Initialize the is_force_ member variable

The is_force_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.
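A minimal sketch of the fix via an in-class initializer (the surrounding class is hypothetical; only the is_force_ member mirrors this comment):

#include <string>

class PKHLenCmdSketch {
 public:
  void Clear() {
    key_.clear();
    is_force_ = false;  // reset between command executions
  }

 private:
  std::string key_;
  bool is_force_ = false;  // in-class initializer guarantees a defined value
};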


private:
std::string key_;
bool is_wt_;

⚠️ Potential issue

Initialize the is_wt_ member variable

The is_wt_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.

Comment on lines 116 to 130
<<<<<<< HEAD
virtual Status StringsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status HashesExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status ListsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status ZsetsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status SetsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
=======
virtual Status StringsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status HashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status PKHashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});

virtual Status ListsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status ZsetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status SetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
>>>>>>> 9d208a64 (feat: add pkhash tests and hash basic cmd)

⚠️ Potential issue

Resolve merge conflict in Expire methods

There's a merge conflict in the Expire methods that needs to be resolved. When resolving:

  1. Use the updated parameter name ttl instead of ttl_millsec for consistency.
  2. Include the new PKHashesExpire method.
  3. Ensure all methods use the std::string&& prefetch_meta = {} parameter.

Here's a suggested resolution:

virtual Status StringsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status HashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status PKHashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status ListsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status ZsetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status SetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});

Comment on lines +384 to +385
Status XRange(const Slice& key, const StreamScanArgs& args, std::vector<IdMessage>& id_messages,
std::string&& prefetch_meta = {});

⚠️ Potential issue

Potential issue with rvalue reference parameter

The GetStreamMeta and XRange methods have been updated to include a std::string&& prefetch_meta = {} parameter. While this is consistent with other methods, using an rvalue reference with a default value is not standard C++ practice and may lead to compilation errors or unexpected behavior.

Consider changing these parameters to const std::string& prefetch_meta = {} or std::string prefetch_meta = {} instead.

Apply this diff to fix the issue:

- Status XRange(const Slice& key, const StreamScanArgs& args, std::vector<IdMessage>& id_messages,
-               std::string&& prefetch_meta = {});
+ Status XRange(const Slice& key, const StreamScanArgs& args, std::vector<IdMessage>& id_messages,
+               const std::string& prefetch_meta = {});

- Status GetStreamMeta(StreamMetaValue& tream_meta, const rocksdb::Slice& key, rocksdb::ReadOptions& read_options,
-                      std::string&& prefetch_meta = {});
+ Status GetStreamMeta(StreamMetaValue& tream_meta, const rocksdb::Slice& key, rocksdb::ReadOptions& read_options,
+                      const std::string& prefetch_meta = {});

Also applies to: 395-396

Comment on lines +281 to +291
int32_t ret = 0;
s_ = db_->storage()->PKHSet(key_, field_, value_, &ret);
if (s_.ok()) {
res_.AppendContent(":" + std::to_string(ret));
AddSlotKey("h", key_, db_);
} else if (s_.IsInvalidArgument()) {
res_.SetRes(CmdRes::kMultiKey);
} else {
res_.SetRes(CmdRes::kErrOther, s_.ToString());
}
}

⚠️ Potential issue

Implement expiration functionality in PKHSetexCmd

The PKHSetexCmd class currently does not implement the expiration functionality and is identical to PKHSetCmd, lacking the handling of expiration time for the field-value pair.

Please update PKHSetexCmd to correctly parse the expiration time in DoInitial() and utilize a storage method that sets both the field-value pair and its expiration in Do(). For example:

void PKHSetexCmd::DoInitial() {
  if (!CheckArg(argv_.size())) {
    res_.SetRes(CmdRes::kWrongNum, kCmdNamePKHSetex);
    return;
  }
  key_ = argv_[1];
  field_ = argv_[2];
  if (pstd::string2int(argv_[3].data(), argv_[3].size(), &ttl_) != 1) {
    res_.SetRes(CmdRes::kInvalidInt);
    return;
  }
  value_ = argv_[4];
}

void PKHSetexCmd::Do() {
  int32_t ret = 0;
  s_ = db_->storage()->PKHSetex(key_, field_, value_, ttl_, &ret);
  if (s_.ok()) {
    res_.AppendContent(":" + std::to_string(ret));
    AddSlotKey("h", key_, db_);
  } else if (s_.IsInvalidArgument()) {
    res_.SetRes(CmdRes::kMultiKey);
  } else {
    res_.SetRes(CmdRes::kErrOther, s_.ToString());
  }
}

Comment on lines +416 to +422
Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
std::vector<int32_t>* rets);

🛠️ Refactor suggestion

Use int64_t for ttl parameters to support larger expiration times

Consider changing the ttl parameter from int32_t to int64_t in PKHExpire and PKHSetex to support larger expiration times and maintain consistency with other TTL parameters.

Apply this diff:

 // In PKHExpire
-Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
+Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
                   std::vector<int32_t>* rets);

 // In PKHSetex
-Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret);
+Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret);

Also applies to: 432-433

Comment on lines +870 to +877
ASSERT_TRUE(s.IsCorruption());
ASSERT_EQ(value, 0);

⚠️ Potential issue

Use IsInvalidArgument instead of IsCorruption

Similarly, update the status check when attempting to increment a field with invalid numeric data.

Apply this change:

-  ASSERT_TRUE(s.IsCorruption());
+  ASSERT_TRUE(s.IsInvalidArgument());

Comment on lines +861 to +868
ASSERT_TRUE(s.IsCorruption());
ASSERT_EQ(value, 0);

⚠️ Potential issue

Use IsInvalidArgument instead of IsCorruption

In the PKHIncrby test, when incrementing a field with invalid numeric data, you check s.IsCorruption(). It's more appropriate to check for s.IsInvalidArgument() in cases of invalid input.

Apply this change:

-  ASSERT_TRUE(s.IsCorruption());
+  ASSERT_TRUE(s.IsInvalidArgument());

s = db.PKHSet("HINCRBY_KEY", "HINCRBY_NUM_FIELD", "10", &ret);
ASSERT_TRUE(s.ok());
s = db.PKHIncrby("HINCRBY_KEY", "HINCRBY_NUM_FIELD", 9223372036854775807, &value);
ASSERT_TRUE(s.IsInvalidArgument());

⚠️ Potential issue

Check for integer overflow error

When incrementing a field by a value that would cause an integer overflow, ensure you're checking for the correct error status.

If the storage layer provides a specific status for overflow, such as IsOverflow(), consider using it. Otherwise, confirm that IsInvalidArgument() is appropriate.

-  ASSERT_TRUE(s.IsInvalidArgument());
+  ASSERT_TRUE(s.IsInvalidArgument() || s.IsOverflow());

Ensure that the storage layer returns an appropriate status for integer overflow.

Committable suggestion was skipped due to low confidence.

Comment on lines +2016 to +2017
Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
const std::unordered_map<std::string, std::string>& options) {

⚠️ Potential issue

Typo in function name EnableDymayticOptions

The function name EnableDymayticOptions appears to have a typographical error. It should be EnableDynamicOptions to correctly reflect the intended meaning.

Apply this diff to correct the function name:

-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
                                        const std::unordered_map<std::string, std::string>& options) {

Also, ensure all references to this function are updated accordingly.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
-                                      const std::unordered_map<std::string, std::string>& options) {
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
+                                     const std::unordered_map<std::string, std::string>& options) {

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 30

🧹 Outside diff range and nitpick comments (12)
include/pika_pkhash.h (3)

14-38: Consider implementing Clear method and ensure DoInitial is implemented.

The Clear method is currently empty. Consider implementing it to reset the object's state if needed. Also, ensure that the DoInitial method is implemented in the corresponding .cc file.
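
A minimal sketch of what a non-empty Clear could look like for PKHExpireCmd, assuming it overrides the base Cmd hook (as the empty stub suggests) and that the command caches the key, TTL, and field list between DoInitial and Do; the member names are illustrative, not copied from the header:

 private:
  std::string key_;
  int64_t ttl_ = 0;
  std::vector<std::string> fields_;
  rocksdb::Status s_;

  // Reset per-request state so a reused command object cannot leak
  // arguments from a previous invocation.
  void Clear() override {
    key_.clear();
    fields_.clear();
    ttl_ = 0;
  }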


143-164: Consider adding a Clear method for consistency.

Unlike the previous classes, PKHGetCmd doesn't have a Clear method. Consider adding one to maintain consistency across all command classes, even if it's empty.


1-469: Overall good structure with minor consistency issues

The file introduces 19 new command classes for PKHash operations, maintaining a consistent structure across most classes. However, there are a few areas for improvement:

  1. Consider adding a Clear method to all classes for consistency.
  2. Ensure all member variables are properly initialized, either in the constructor or in the Clear method.
  3. Implement the DoInitial methods in the corresponding .cc file.

These changes will improve the overall consistency and robustness of the code.

src/storage/src/redis.cc (1)

296-495: Comprehensive update to GetRocksDBInfo function

The GetRocksDBInfo function has been significantly refactored to provide more detailed information about the RocksDB instance. This includes:

  1. New lambda functions for writing properties and ticker counts.
  2. Additional metrics for memtables, compaction, background errors, keys, SST files, block cache, and blob files.
  3. Detailed RocksDB ticker information.

These changes greatly improve the observability of the RocksDB instance. However, there are a few points to consider:

  1. The function has become quite long. Consider breaking it down into smaller, more manageable functions.
  2. Some of the new metrics might have performance implications when called frequently. Ensure that this function is not called in any hot paths.

Consider refactoring this function into smaller, more focused functions for better maintainability. For example:

void Redis::GetRocksDBInfo(std::string& info, const char* prefix) {
    std::ostringstream string_stream;
    string_stream << "#" << prefix << "RocksDB" << "\r\n";

    WriteAggregatedProperties(string_stream, prefix);
    WriteColumnFamilyProperties(string_stream, prefix);
    WriteTickerCounts(string_stream, prefix);
    WriteColumnFamilyStats(string_stream, prefix);

    info.append(string_stream.str());
}

void Redis::WriteAggregatedProperties(std::ostringstream& stream, const char* prefix) {
    // Write aggregated properties here
}

void Redis::WriteColumnFamilyProperties(std::ostringstream& stream, const char* prefix) {
    // Write column family properties here
}

void Redis::WriteTickerCounts(std::ostringstream& stream, const char* prefix) {
    // Write ticker counts here
}

void Redis::WriteColumnFamilyStats(std::ostringstream& stream, const char* prefix) {
    // Write column family stats here
}

This refactoring would make the code more modular and easier to maintain.

src/storage/include/storage/storage.h (7)

124-129: LGTM: New FieldValueTTL struct added

The FieldValueTTL struct is a good addition for handling field-value pairs with TTL. However, as previously suggested, consider adding a constructor to initialize the members.

Consider adding a constructor to initialize the members:

struct FieldValueTTL {
  std::string field;
  std::string value;
  int32_t ttl;
  
  FieldValueTTL() : field(""), value(""), ttl(0) {}
  FieldValueTTL(const std::string& f, const std::string& v, int32_t t) 
    : field(f), value(v), ttl(t) {}
  
  bool operator==(const FieldValueTTL& fv) const {
    return (fv.field == field && fv.value == value && fv.ttl == ttl);
  }
};

178-178: Comment added for default constructor, but access should be restricted

The comment clarifying that the default constructor is for unit tests only is helpful. However, as previously suggested, consider restricting access to this constructor to prevent unintended use in production code.

Consider making the default constructor private or protected:

class Storage {
private:
  Storage();  // for unit test only
public:
  Storage(int db_instance_num, int slot_num, bool is_classic_mode);
  // ... rest of the class
};

1043-1044: LGTM: New PKPatternMatchDelWithRemoveKeys method added

The new method for pattern matching and key removal is a good addition. Its signature is consistent with other methods in the class.

Consider adding a brief comment explaining the purpose of this method, for example:

// Removes keys matching the given pattern, up to max_count.
// Returns the number of keys removed and populates remove_keys with the removed keys.
Status PKPatternMatchDelWithRemoveKeys(const std::string& pattern, int64_t* ret,
                                       std::vector<std::string>* remove_keys, const int64_t& max_count);

1155-1156: LGTM: New EnableAutoCompaction method added

The new EnableAutoCompaction method is a good addition, likely for managing auto-compaction settings. Its signature is consistent with other methods in the class.

Consider adding a brief comment explaining the purpose of this method, for example:

// Enables auto-compaction for the specified option type and database type with the given options.
Status EnableAutoCompaction(const OptionType& option_type, const std::string& db_type,
                            const std::unordered_map<std::string, std::string>& options);

414-457: LGTM: New Pika Hash (PKH) methods added

The addition of these new PKH methods significantly expands the functionality for Pika Hash operations. The methods are well-structured and consistent with the existing code style.

Consider using int64_t instead of int32_t for TTL-related parameters to maintain consistency with other TTL methods in the class and to support larger TTL values. For example:

Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
                 std::vector<int32_t>* rets);

Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret);

Please review and update other PKH methods that use TTL values similarly.


490-491: Update set operation methods to use pointers for output parameters

The addition of the value_to_dest parameter to SDiffstore, SInterstore, and SUnionstore is good for providing more information about the operation results. However, for consistency with the codebase's conventions, consider using pointers for output parameters instead of references.

Update the method signatures to use pointers for output parameters:

Status SDiffstore(const Slice& destination, const std::vector<std::string>& keys,
                  std::vector<std::string>* value_to_dest, int32_t* ret);

Status SInterstore(const Slice& destination, const std::vector<std::string>& keys,
                   std::vector<std::string>* value_to_dest, int32_t* ret);

Status SUnionstore(const Slice& destination, const std::vector<std::string>& keys,
                   std::vector<std::string>* value_to_dest, int32_t* ret);

Also applies to: 514-515, 574-575


278-279: Update BitOp method to use pointer for output parameter

The addition of the value_to_dest parameter to BitOp is good for providing more information about the operation results. However, for consistency with the codebase's conventions, consider using a pointer for this output parameter instead of a reference.

Update the method signature to use a pointer for the output parameter:

Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys,
             std::string* value_to_dest, int64_t* ret);
src/storage/src/redis.h (1)

528-529: Address the TODO comment.

The TODO comment indicates that separating the environment for each RocksDB instance is a pending task. Consider creating a GitHub issue to track this enhancement if it's not already planned.

Do you want me to open a GitHub issue to track this task?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 801f122 and 3f146b6.

📒 Files selected for processing (19)
  • include/acl.h (1 hunks)
  • include/pika_command.h (5 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_client_conn.cc (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (13 hunks)
  • src/storage/include/storage/storage_define.h (2 hunks)
  • src/storage/src/base_filter.h (6 hunks)
  • src/storage/src/base_value_format.h (5 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.cc (12 hunks)
  • src/storage/src/redis.h (14 hunks)
  • src/storage/src/redis_hashes.cc (29 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/redis_strings.cc (35 hunks)
  • src/storage/src/storage.cc (28 hunks)
  • src/storage/tests/hashes_test.cc (3 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • include/acl.h
  • src/pika_client_conn.cc
  • src/storage/include/storage/storage_define.h
  • src/storage/src/base_filter.h
  • src/storage/src/base_value_format.h
  • src/storage/src/pkhash_data_value_format.h
  • src/storage/src/redis_hashes.cc
  • src/storage/tests/hashes_test.cc
  • src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 565-565: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 774-774: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 857-857: Searching before insertion is not necessary.

(stlFindInsert)

🔇 Additional comments (123)
include/pika_command.h (2)

141-159: LGTM: New PKHash command constants added.

The new PKHash command constants have been added correctly, following the existing naming conventions. These additions align with the PR objectives of introducing new PKHash commands to the Pika database.


Line range hint 141-314: Overall assessment: Changes are well-integrated and consistent.

The additions for PKHash commands and the new flag are well-structured and consistent with the existing codebase. These changes successfully implement the new PKHash functionality as outlined in the PR objectives.

To ensure full integration:

  1. Verify that the new PKHash commands are properly implemented in their respective files.
  2. Check that the kCmdFlagsPKHash flag is correctly used in command implementations.
  3. Update any relevant documentation to reflect these new PKHash capabilities.
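
As a concrete check for point 2, each PKHash registration should carry the new flag alongside the usual access and speed flags. A sketch based on the PKHTTL initialization quoted later in this review (with the read flag applied as suggested there); the table-insert idiom is assumed to match the surrounding InitCmdTable code:

  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
      kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
  cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));
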
src/storage/src/redis.cc (4)

218-218: LGTM: New column family included in CompactRange

The new 'pika_hash_data_cf' column family has been correctly included in the CompactRange function.


498-498: LGTM: New setter for write WAL options

The new SetWriteWalOptions function provides a way to disable Write-Ahead Logging, which can be useful for certain scenarios where durability can be traded for performance.
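
A hedged usage sketch, assuming the setter takes a single bool that toggles rocksdb::WriteOptions::disableWAL (confirm the actual signature in redis.h); redis_inst and LoadSnapshotIntoDB are hypothetical names:

  // Illustrative only: trade WAL durability for write throughput during a bulk load.
  redis_inst->SetWriteWalOptions(true);   // assumed to set WriteOptions::disableWAL = true
  LoadSnapshotIntoDB(redis_inst);         // hypothetical bulk-write helper
  redis_inst->SetWriteWalOptions(false);  // restore normal durability afterwards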


167-168: Update column family indices after adding new column family

With the addition of 'pika_hash_data_cf' to the column_families vector, the indices of existing column families in the handles_ array may have shifted. Ensure that all references to handles_ use the correct indices corresponding to the updated order of column families.

#!/bin/bash
# Search for references to handles_ array
rg '\bhandles_\[' src

94-95: Verify the compaction filter factory for the new PK Hash column family

The new PK Hash column family is using the same compaction filter factory as the regular hash column family. This might be intentional, but it's worth verifying if a separate compaction filter is needed for PK Hash operations.

Also applies to: 102-110

src/storage/include/storage/storage.h (3)

26-28: LGTM: New include for PKHash functionality

The addition of the pkhash_data_value_format.h include is appropriate for the new PKH methods being introduced.


164-164: LGTM: New Operation enum added

The Operation enum is a good addition, likely for use with background tasks. The naming is clear and consistent with the existing code style.


Line range hint 1-1173: Overall good additions with some minor improvements needed

The changes to this file, particularly the addition of the PKH (Pika Hash) methods, significantly expand the functionality of the Storage class. The new methods are well-structured and consistent with the existing code style.

However, there are a few areas that could be improved:

  1. Consistency in parameter passing: Consider using pointers instead of references for output parameters throughout the class to maintain consistency.
  2. TTL parameter types: Review the use of int32_t vs int64_t for TTL-related parameters to ensure consistency and support for larger TTL values.
  3. Method naming and comments: Correct the typo in EnableDymayticOptions and consider adding brief comments to new methods to explain their purpose.
  4. Constructor access: Consider restricting access to the default constructor as previously suggested.

Addressing these points will improve the overall consistency and clarity of the code.

src/storage/src/storage.cc (22)

7-7: Approved: Include statement reorganization

The reordering of include statements improves code organization without introducing new dependencies or affecting functionality.

Also applies to: 11-12, 17-17, 20-21


Line range hint 199-217: Approved: Improved MGet implementation

The change to a range-based for loop enhances readability and reduces the risk of index-related errors. This is a good use of modern C++ features.
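
As a self-contained illustration of the pattern (not the actual MGet code):

#include <string>
#include <vector>

// Collect a value per key without manual index arithmetic.
std::vector<std::string> LookupAll(const std::vector<std::string>& keys) {
  std::vector<std::string> values;
  values.reserve(keys.size());
  for (const auto& key : keys) {           // range-based: no i, no keys[i], no off-by-one
    values.push_back("value-for-" + key);  // placeholder for the real per-key lookup
  }
  return values;
}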


Line range hint 218-236: Approved: Consistent improvement in MGetWithTTL

This change mirrors the improvement made to the MGet method, using a range-based for loop. This consistency in coding style across similar methods is commendable.


304-308: Approved: Enhanced input validation in BitOp

The addition of a check for the NOT operation with multiple source keys improves the robustness of the BitOp method. This prevents invalid operations and potential errors.
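
The guard presumably mirrors Redis semantics, where BITOP NOT accepts exactly one source key. A sketch of such a check (the enum spelling and message text are assumptions, not the exact code):

  if (op == BitOpType::kBitOpNot && src_keys.size() != 1) {
    return Status::InvalidArgument("BITOP NOT must be called with a single source key");
  }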


488-595: Approved: Implementation of new Pika Hash commands

The addition of these new Pika Hash commands (PKHExpire, PKHExpireat, PKHExpiretime, PKHPersist, PKHTTL, PKHGet, PKHSet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrlen, PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetall, and PKHScan) aligns well with the PR objectives. The implementation consistently delegates to the appropriate database instance, maintaining the existing architecture.
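
The wrappers presumably follow the routing idiom visible elsewhere in this file (for example in MGetWithTTL): pick the DB instance that owns the key and forward the call. A sketch for one command, with the body treated as an assumption:

Status Storage::PKHGet(const Slice& key, const Slice& field, std::string* value) {
  auto& inst = GetDBInstance(key);         // route by key to the owning Redis instance
  return inst->PKHGet(key, field, value);  // delegate; the instance does the real work
}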


646-647: Approved: Enhanced functionality in set operations

The addition of the value_to_dest parameter to SDiffstore, SInterstore, and SUnionstore methods improves their functionality. This change allows the caller to receive the resulting set values directly, potentially reducing the need for additional queries and improving performance.

Also applies to: 714-715, 829-830


1021-1021: Approved: Improved code formatting

The line breaks added to the ZRangebyscore and ZRevrangebyscore methods improve code readability without affecting functionality. This change helps prevent excessively long lines.

Also applies to: 1070-1071


1210-1211: Approved: Enhanced readability in method signatures

The parameter lists for ZRangebylex, ZLexcount, and ZRemrangebylex methods have been split across multiple lines, improving code readability without affecting functionality. This change helps prevent excessively long lines and makes the method signatures easier to read.

Also applies to: 1217-1218, 1223-1224


1267-1267: Approved: Improved readability in XRead method signature

The addition of a line break in the XRead method signature improves code readability without affecting functionality. This change helps prevent an excessively long line.


1356-1357: Approved: Simplified key_type determination

The use of a ternary operator to determine the key_type based on the dtype simplifies the code without changing its functionality. This change makes the code more concise and easier to read.
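
Purely as an illustration of the pattern (the surrounding identifiers are simplified, not the real Scan code):

  // Before:
  //   char key_type;
  //   if (dtype == DataType::kAll) {
  //     key_type = 'a';
  //   } else {
  //     key_type = DataTypeTag[static_cast<int>(dtype)];
  //   }
  // After: one conditional expression.
  const char key_type = (dtype == DataType::kAll) ? 'a' : DataTypeTag[static_cast<int>(dtype)];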


1384-1384: Approved: Simplified CreateIterator call

The removal of unnecessary parameters from the CreateIterator call simplifies the code without changing its functionality.


1442-1443: Approved: Consistent simplification of CreateIterator calls

The removal of unnecessary parameters from the CreateIterator calls in PKScanRange and PKRScanRange methods simplifies the code without changing functionality. This change is consistent with the earlier modification to the Scan method, maintaining a uniform approach across the codebase.

Also applies to: 1490-1491


Line range hint 1517-1526: Approved: Enhanced control in PKPatternMatchDelWithRemoveKeys

The addition of the max_count parameter and its usage in PKPatternMatchDelWithRemoveKeys provides better control over the number of keys deleted. The early return when the maximum count is reached improves efficiency by preventing unnecessary processing.
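
A rough sketch of the shape such a capped scan takes (the iterator calls are standard RocksDB; KeyMatchesPattern is a hypothetical helper standing in for the real pattern match):

  int64_t deleted = 0;
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    if (!KeyMatchesPattern(pattern, it->key())) {
      continue;
    }
    remove_keys->push_back(it->key().ToString());
    ++deleted;
    if (deleted >= max_count) {  // early exit: stop scanning once the cap is reached
      break;
    }
  }
  *ret = deleted;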


1544-1544: Approved: Consistent simplification of CreateIterator call in Scanx

The removal of unnecessary parameters from the CreateIterator call in the Scanx method simplifies the code without changing functionality. This change maintains consistency with similar modifications made to other methods in the file.


1991-1991: Approved: Improved logging in GetDBByIndex

The change from direct stderr output to LOG(WARNING) in the GetDBByIndex method improves the consistency of logging throughout the codebase. This approach allows for better log management and filtering.


1998-1998: Approved: Enhanced flexibility with db_type parameter

The addition of the db_type parameter to SetOptions, EnableDymayticOptions, and EnableAutoCompaction methods provides more flexibility in handling different database types. This change allows for more specific customization based on the database type.

Also applies to: 2016-2017, 2028-2029


2036-2036: Approved: Consistent handling of Pika Hash in EnableAutoCompaction

The addition of pk_hash_cfhds (Pika Hash column family handles) in the EnableAutoCompaction method ensures that the new Pika Hash data structures are properly managed during auto-compaction. This change is consistent with the Pika Hash functionality added earlier in the file.

Also applies to: 2042-2042


Line range hint 1-2060: Summary of changes in storage.cc

This review covered significant additions and modifications to the Storage class, primarily focusing on new Pika Hash commands and improvements to existing methods. Key points:

  1. New Pika Hash commands (PKHExpire, PKHExpireat, etc.) have been successfully implemented.
  2. Existing methods like MGet, MGetWithTTL, and various set operations have been improved for better readability and efficiency.
  3. Several CreateIterator calls have been simplified consistently across different methods.
  4. Minor improvements in code formatting and logging have been made.

A few issues were identified:

  1. Potential C++11 compatibility issues with std::plus<> usage.
  2. A typo in the method name "EnableDymayticOptions".
  3. Inconsistent parameter passing style in some methods.

Overall, the changes align well with the PR objectives and generally improve the code quality. Addressing the identified issues will further enhance the robustness of the implementation.


2016-2017: ⚠️ Potential issue

Fix typo in method name

There's a typo in the method name "EnableDymayticOptions". It should be "EnableDynamicOptions".

Apply this diff to correct the method name:

-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
                                       const std::unordered_map<std::string, std::string>& options) {

Likely invalid or redundant comment.


821-823: ⚠️ Potential issue

Fix potential undefined behavior in SUnion

Using std::move_iterator with std::copy to insert elements into an unordered_set can lead to undefined behavior. Moved-from elements may not remain valid for hashing and equality checks required by the set.

Apply this diff to fix the issue:

-      std::copy(std::move_iterator<Iter>(vec.begin()), std::move_iterator<Iter>(vec.end()),
+      std::copy(vec.begin(), vec.end(),
                std::insert_iterator<Uset>(member_set, member_set.begin()));

Likely invalid or redundant comment.


530-533: 🛠️ Refactor suggestion

Consider using int64_t for TTL parameter in PKHSetex

In the PKHSetex method, the TTL parameter is currently an int32_t. To support larger TTL values and maintain consistency with other time-related parameters, consider changing it to int64_t.

Apply this diff:

-Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret) {
+Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret) {

Likely invalid or redundant comment.


1398-1398: Verify the impact of changed comparison logic

The condition in the while loop has been modified. Please verify that this change doesn't alter the intended behavior of the loop.

To verify the impact of this change, compare the loop's termination behavior before and after the modification, for example by re-running the scan-related unit tests.

src/pika_pkhash.cc (2)

306-317: 🛠️ Refactor suggestion

Simplify error handling in PKHExistsCmd::Do().

The error handling can be simplified by combining the ok and IsNotFound cases.

Apply this diff:

 void PKHExistsCmd::Do() {
   s_ = db_->storage()->HExists(key_, field_);
-  if (s_.ok()) {
-    res_.AppendContent(":1");
-  } else if (s_.IsInvalidArgument()) {
-    res_.SetRes(CmdRes::kMultiKey);
-  } else if (s_.IsNotFound()) {
-    res_.AppendContent(":0");
+  if (s_.ok() || s_.IsNotFound()) {
+    res_.AppendContent(s_.ok() ? ":1" : ":0");
   } else {
     res_.SetRes(CmdRes::kErrOther, s_.ToString());
   }
 }

Likely invalid or redundant comment.


208-221: 🛠️ Refactor suggestion

Simplify error handling in PKHGetCmd::Do().

The error handling can be simplified by combining the IsInvalidArgument and IsNotFound cases.

Apply this diff:

 void PKHGetCmd::Do() {
   std::string value;
   s_ = db_->storage()->PKHGet(key_, field_, &value);
   if (s_.ok()) {
     res_.AppendStringLenUint64(value.size());
     res_.AppendContent(value);
-  } else if (s_.IsInvalidArgument()) {
-    res_.SetRes(CmdRes::kMultiKey);
-  } else if (s_.IsNotFound()) {
+  } else if (s_.IsNotFound() || s_.IsInvalidArgument()) {
     res_.AppendContent("$-1");
   } else {
     res_.SetRes(CmdRes::kErrOther, s_.ToString());
   }
 }

Likely invalid or redundant comment.

src/storage/src/redis.h (23)

17-19: LGTM!

The added header includes appear relevant and necessary for the new functionality.


24-24: LGTM!

The redis_streams.h header is required for the new stream-related methods.


77-78: ****

The previous review comment about ensuring thread safety for modify_count operations is still applicable. Please address it if not already done.


86-87: LGTM!

The constructor changes are minor formatting improvements and do not alter functionality.


94-94: LGTM!

Adding a const qualifier to GetIndex() is a good practice as it indicates the method does not modify the object's state.


121-121: LGTM!

The new PKHashesExpire method declaration is consistent with the existing Expire methods for other data types.


151-152: LGTM!

The formatting change in the BitOp method signature improves readability without altering functionality.


244-247: ****

The previous review comment about validating bounds and correctness in GetPKHashCFHandles is still applicable. Please address it if not already done.


249-250: LGTM!

The minor formatting change in GetRocksDBInfo is acceptable.


297-298: LGTM!

The formatting change in the SDiffstore method signature is acceptable and improves readability.


300-301: LGTM!

The formatting change in the SInterstore method signature is acceptable and improves readability.


310-311: LGTM!

The formatting change in the SUnionstore method signature is acceptable and improves readability.


375-376: ****

The previous review comment about the potential issue with the rvalue reference parameter in XRange is still applicable. Please address it if not already done.


386-387: ****

The previous review comment about the potential issue with the rvalue reference parameter in GetStreamMeta is still applicable. Please address it if not already done.


407-408: LGTM!

The formatting change in the CreateIterator method signature is acceptable and improves readability.


412-413: LGTM!

The formatting change in the overloaded CreateIterator method signature is acceptable and improves readability.


446-446: LGTM!

The minor formatting change in GetMetaValueType is acceptable.


451-451: LGTM!

The minor formatting change in ExpectedMetaValue is acceptable.


459-459: LGTM!

The minor formatting change in ExpectedStale is acceptable.


464-465: LGTM!

The addition of the kPKHashes case in the ExpectedStale method is consistent with the new PK Hash functionality.


487-487: LGTM!

Making the GenerateStreamID method private is a good practice to encapsulate internal functionality.


540-543: LGTM!

The formatting changes in the GetScanStartPoint and StoreScanNextPoint method signatures are acceptable and improve readability.


252-291: Verify the new PK Hash methods in the codebase.

The addition of the new PK Hash methods (PKHExpire, PKHExpireat, PKHExpiretime, PKHTTL, PKHPersist, PKHGet, PKHSet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrlen, PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetall, PKHScan) extends the functionality of the Redis class.

Please ensure that:

  • These methods are implemented correctly in the corresponding source files.
  • The methods are used appropriately throughout the codebase.
  • Relevant tests are added to verify the functionality of these methods.

Run the following script to check the usage of these new methods in the codebase:

✅ Verification successful

PK Hash Methods Verified Successfully

All new PK Hash methods are properly implemented, utilized throughout the codebase, and adequately tested.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for usage of new PK Hash methods

# List of new PK Hash method names
methods=(
  "PKHExpire" "PKHExpireat" "PKHExpiretime" "PKHTTL" "PKHPersist"
  "PKHGet" "PKHSet" "PKHSetex" "PKHExists" "PKHDel" "PKHLen"
  "PKHStrlen" "PKHIncrby" "PKHMSet" "PKHMSetex" "PKHMGet"
  "PKHKeys" "PKHVals" "PKHGetall" "PKHScan"
)

# Search for usage of each method
for method in "${methods[@]}"; do
  echo "Checking usage of $method:"
  rg "\b${method}\s*\(" --type cpp
  echo
done

Length of output: 34819

src/storage/src/redis_strings.cc (1)

8-8: 🛠️ Refactor suggestion

Remove unnecessary inclusion of <iostream>

The header <iostream> is not used in this file and can be safely removed to reduce compilation time and prevent potential namespace pollution.

Apply this diff to remove the unnecessary include:

-#include <iostream>

Likely invalid or redundant comment.

src/pika_command.cc (66)

19-19: LGTM!

The new include statement for pika_pkhash.h is correctly added.


159-161: LGTM!

The clearcacheptr command is correctly initialized with the appropriate flags.


162-164: LGTM!

The lastsaveptr command is correctly initialized with the appropriate flags.


247-249: LGTM!

The setptr command is correctly initialized with the appropriate flags.


251-254: LGTM!

The getptr command is correctly initialized with the appropriate flags.


256-259: LGTM!

The delptr command is correctly initialized with the appropriate flags.


264-266: LGTM!

The incrptr command is correctly initialized with the appropriate flags.


269-270: LGTM!

The incrbyptr command is correctly initialized with the appropriate flags.


273-275: LGTM!

The incrbyfloatptr command is correctly initialized with the appropriate flags.


277-279: LGTM!

The decrptr command is correctly initialized with the appropriate flags.


282-283: LGTM!

The decrbyptr command is correctly initialized with the appropriate flags.


286-287: LGTM!

The getsetptr command is correctly initialized with the appropriate flags.


290-291: LGTM!

The appendptr command is correctly initialized with the appropriate flags.


293-296: LGTM!

The mgetptr command is correctly initialized with the appropriate flags.


303-304: LGTM!

The setnxptr command is correctly initialized with the appropriate flags.


306-308: LGTM!

The setexptr command is correctly initialized with the appropriate flags.


310-312: LGTM!

The psetexptr command is correctly initialized with the appropriate flags.


315-316: LGTM!

The delvxptr command is correctly initialized with the appropriate flags.


318-320: LGTM!

The msetptr command is correctly initialized with the appropriate flags.


322-324: LGTM!

The msetnxptr command is correctly initialized with the appropriate flags.


327-329: LGTM!

The getrangeptr command is correctly initialized with the appropriate flags.


332-333: LGTM!

The setrangeptr command is correctly initialized with the appropriate flags.


335-338: LGTM!

The strlenptr command is correctly initialized with the appropriate flags.


340-343: LGTM!

The existsptr command is correctly initialized with the appropriate flags.


346-348: LGTM!

The expireptr command is correctly initialized with the appropriate flags.


351-353: LGTM!

The pexpireptr command is correctly initialized with the appropriate flags.


355-358: LGTM!

The expireatptr command is correctly initialized with the appropriate flags.


360-363: LGTM!

The pexpireatptr command is correctly initialized with the appropriate flags.


365-367: LGTM!

The ttlptr command is correctly initialized with the appropriate flags.


369-371: LGTM!

The pttlptr command is correctly initialized with the appropriate flags.


373-376: LGTM!

The persistptr command is correctly initialized with the appropriate flags.


378-380: LGTM!

The typeptr command is correctly initialized with the appropriate flags.


391-392: LGTM!

The pksetexatptr command is correctly initialized with the appropriate flags.


394-396: LGTM!

The pkscanrangeptr command is correctly initialized with the appropriate flags.


398-400: LGTM!

The pkrscanrangeptr command is correctly initialized with the appropriate flags.


404-406: LGTM!

The hdelptr command is correctly initialized with the appropriate flags.


408-410: LGTM!

The hsetptr command is correctly initialized with the appropriate flags.


412-415: LGTM!

The hgetptr command is correctly initialized with the appropriate flags.


417-420: LGTM!

The hgetallptr command is correctly initialized with the appropriate flags.


422-425: LGTM!

The hexistsptr command is correctly initialized with the appropriate flags.


427-429: LGTM!

The hincrbyptr command is correctly initialized with the appropriate flags.


431-434: LGTM!

The hincrbyfloatptr command is correctly initialized with the appropriate flags.


436-439: LGTM!

The hkeysptr command is correctly initialized with the appropriate flags.


441-444: LGTM!

The hlenptr command is correctly initialized with the appropriate flags.


446-449: LGTM!

The hmgetptr command is correctly initialized with the appropriate flags.


451-453: LGTM!

The hmsetptr command is correctly initialized with the appropriate flags.


455-457: LGTM!

The hsetnxptr command is correctly initialized with the appropriate flags.


459-462: LGTM!

The hstrlenptr command is correctly initialized with the appropriate flags.


464-467: LGTM!

The hvalsptr command is correctly initialized with the appropriate flags.


469-471: LGTM!

The hscanptr command is correctly initialized with the appropriate flags.


473-475: LGTM!

The hscanxptr command is correctly initialized with the appropriate flags.


477-479: LGTM!

The pkhscanrangeptr command is correctly initialized with the appropriate flags.


481-483: LGTM!

The pkhrscanrangeptr command is correctly initialized with the appropriate flags.


485-488: LGTM!

The ehsetptr command is correctly initialized with the appropriate flags.


490-493: LGTM!

The ehexpireptr command is correctly initialized with the appropriate flags.


495-497: LGTM!

The ehexpireatptr command is correctly initialized with the appropriate flags.


507-510: LGTM!

The ehpersistptr command is correctly initialized with the appropriate flags.


511-514: LGTM!

The ehgetptr command is correctly initialized with the appropriate flags.


516-518: LGTM!

The ehsetexptr command is correctly initialized with the appropriate flags.


520-522: LGTM!

The ehexistsptr command is correctly initialized with the appropriate flags.


524-526: LGTM!

The ehdelptr command is correctly initialized with the appropriate flags.


528-530: LGTM!

The ehlenptr command is correctly initialized with the appropriate flags.


532-534: LGTM!

The ehstrlenptr command is correctly initialized with the appropriate flags.


536-538: LGTM!

The ehincrbyptr command is correctly initialized with the appropriate flags.


502-505: ⚠️ Potential issue

Correct command flags for PKHTTLCmd

The PKHTTL command is intended to return the time-to-live for a hash key, which is a read operation. However, it is currently flagged as a write command. Please change the flags to use kCmdFlagsRead to accurately reflect its functionality.

-  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
-      kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
+      kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);

Likely invalid or redundant comment.


498-501: ⚠️ Potential issue

Correct command flags for PKHExpiretimeCmd

The PKHExpiretime command retrieves the expiration time of a hash key, which is a read operation. Currently, it is flagged as a write command. Please change the flags to use kCmdFlagsRead to accurately reflect its functionality.

Apply this diff:

-  std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
-      kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+  std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
+      kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);

Likely invalid or redundant comment.

Comment on lines +257 to +278
class PKHLenCmd : public Cmd {
 public:
  PKHLenCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHLenCmd(*this); }

 private:
  std::string key_;
  bool is_force_;
  void DoInitial() override;
  rocksdb::Status s_;
};

⚠️ Potential issue

Initialize the is_force_ member variable.

The is_force_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.
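
A minimal fix is an in-class default initializer (shown on a copy of the declaration); the same applies to is_wt_ in PKHGetAllCmd below:

 private:
  std::string key_;
  bool is_force_ = false;  // default-initialize so the flag is never read before assignment
  void DoInitial() override;
  rocksdb::Status s_;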

Comment on lines +416 to +437
class PKHGetAllCmd : public Cmd {
 public:
  PKHGetAllCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHGetAllCmd(*this); }

 private:
  std::string key_;
  bool is_wt_;
  void DoInitial() override;
  rocksdb::Status s_;
};

⚠️ Potential issue

Initialize the is_wt_ member variable.

The is_wt_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.

@@ -290,7 +310,8 @@ enum CmdFlags {
   kCmdFlagsOperateKey = (1 << 19),  // redis keySpace
   kCmdFlagsStream = (1 << 20),
   kCmdFlagsFast = (1 << 21),
-  kCmdFlagsSlow = (1 << 22)
+  kCmdFlagsSlow = (1 << 22),
+  kCmdFlagsPKHash = (1 << 23),

💡 Codebase verification

Flag Inconsistencies Found: Missing kCmdFlagsPKHash

The following files are missing the kCmdFlagsPKHash flag where it should be applied:

  • tools/pika-port/pika_port_3/pika_command.h
  • include/pika_hyperloglog.h
  • src/pika_zset.cc
  • include/pika_set.h
  • include/pika_kv.h
  • include/pika_bit.h
  • src/pika_list.h
  • src/pika_cmd_table_manager.cc
  • src/acl.cc

Please ensure that kCmdFlagsPKHash is consistently added to all relevant command definitions to maintain proper functionality.

🔗 Analysis chain

LGTM: New PKHash flag added.

The kCmdFlagsPKHash flag has been added to the CmdFlags enum, which is consistent with the addition of new PKHash commands. This flag will allow for proper identification and handling of PKHash commands in the system.

To ensure the new flag is being utilized correctly, please run the following script to check for its usage across the codebase:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the new kCmdFlagsPKHash flag

# Search for uses of the new flag
echo "Searching for uses of kCmdFlagsPKHash:"
rg "kCmdFlagsPKHash" --type cpp

# Search for potential places where the flag should be used
echo "Potential places where kCmdFlagsPKHash should be used:"
rg "kCmdFlags(Kv|Hash|List|Set|Zset|Bit|HyperLogLog|Geo|PubSub|Stream)" --type cpp

Length of output: 23556


Script:

#!/bin/bash
# Description: Verify that kCmdFlagsPKHash is used in all relevant command definitions

# List of potential flag definitions where kCmdFlagsPKHash should be applied
potential_flags=(
  "kCmdFlagsKv"
  "kCmdFlagsHash"
  "kCmdFlagsList"
  "kCmdFlagsSet"
  "kCmdFlagsZset"
  "kCmdFlagsBit"
  "kCmdFlagsHyperLogLog"
  "kCmdFlagsGeo"
  "kCmdFlagsPubSub"
  "kCmdFlagsStream"
)

echo "Verifying usage of kCmdFlagsPKHash in relevant command definitions:"

for flag in "${potential_flags[@]}"; do
  echo "Checking for commands using ${flag}:"
  rg "${flag}" --type cpp | while read -r line; do
    file=$(echo "$line" | cut -d':' -f1)
    if ! grep -q "kCmdFlagsPKHash" "$file"; then
      echo "Missing kCmdFlagsPKHash in $file"
    fi
  done
done

Length of output: 47799

}
// column family stats
std::map<std::string, std::string> mapvalues;
db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);

⚠️ Potential issue

Fix method invocation of 'GetMapProperty'

The method call to GetMapProperty incorrectly qualifies the method with rocksdb::DB::. Since db_ is a pointer to rocksdb::DB, the correct invocation should be:

db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);

Apply this diff to fix the method call:

-  db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
+  db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);

@@ -952,7 +1005,7 @@ class Storage {
   Status XLen(const Slice& key, int32_t& len);
   Status XRead(const StreamReadGroupReadArgs& args, std::vector<std::vector<storage::IdMessage>>& results,
                std::vector<std::string>& reserved_keys);
-  Status XInfo(const Slice& key, StreamInfoResult &result);
+  Status XInfo(const Slice& key, StreamInfoResult& result);

⚠️ Potential issue

Revert change and use pointer for output parameter in XInfo

The change to pass result by value goes against the existing comment and the general practice in this codebase. It's better to use a pointer for output parameters for consistency and efficiency.

Please revert this change and use a pointer as suggested:

Status XInfo(const Slice& key, StreamInfoResult* result);

Comment on lines +706 to +721
version = parsed_hashes_meta_value.UpdateVersion();
parsed_hashes_meta_value.SetCount(1);
parsed_hashes_meta_value.SetEtime(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);

🛠️ Refactor suggestion

Simplify version and count updates

In the PKHIncrby function, when the hash is stale or empty, the code updates the version and resets the count. However, resetting the count and expiration time is unnecessary because UpdateVersion() and InitialMetaValue() already handle these.

Apply this diff to remove redundant code:

 if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
   version = parsed_hashes_meta_value.UpdateVersion();
-  parsed_hashes_meta_value.SetCount(1);
-  parsed_hashes_meta_value.SetEtime(0);
   batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
   HashesDataKey hashes_data_key(key, version, field);

Committable suggestion was skipped due to low confidence.

Comment on lines +1157 to +1175
int64_t rest = count;
int64_t step_length = count;
rocksdb::ReadOptions read_options;
const rocksdb::Snapshot* snapshot;


⚠️ Potential issue

Validate count parameter to prevent unexpected behavior

In the PKHScan function, variables rest and step_length are initialized with count, but there is no validation to ensure count is positive. A non-positive count could lead to unexpected behavior.

Add validation for the count parameter:

 if (count <= 0) {
   *next_cursor = 0;
   return Status::OK();
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
+  if (count <= 0) {
+    *next_cursor = 0;
+    return Status::OK();
+  }
   int64_t rest = count;
   int64_t step_length = count;
   rocksdb::ReadOptions read_options;
   const rocksdb::Snapshot* snapshot;

Comment on lines +1174 to +1190
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
}
}

⚠️ Potential issue

Handle unexpected errors from GetScanStartPoint

After retrieving the scan start point with GetScanStartPoint, the current code only checks for IsNotFound(). Other errors are not handled, which might lead to unintended behavior.

Adjust the error handling to catch all unexpected errors:

 s = GetScanStartPoint(DataType::kPKHashes, key, pattern, cursor, &start_point);
- if (s.IsNotFound()) {
+ if (!s.ok()) {
+   if (s.IsNotFound()) {
      cursor = 0;
      if (isTailWildcard(pattern)) {
        start_point = pattern.substr(0, pattern.size() - 1);
      }
+   } else {
+     return s;
+   }
 }

Committable suggestion was skipped due to low confidence.

Comment on lines 190 to 208
  for (const auto& field : fields) {
    HashesDataKey data_key(key, version, field);
    std::string data_value;
    s = db_->Get(default_read_options_, handles_[kPKHashDataCF], data_key.Encode(), &data_value);
    if (s.ok()) {
      ParsedPKHashDataValue parsed_internal_value(&data_value);
      if (parsed_internal_value.IsStale()) {
        rets->push_back(-2);
      } else {
        rets->push_back(1);
        parsed_internal_value.SetRelativeTimestamp(ttl);
        batch.Put(handles_[kPKHashDataCF], data_key.Encode(), data_value);
      }
    }
  }

⚠️ Potential issue

Handle potential errors when setting field expirations

In the PKHExpire method, the loop over the fields vector does not handle cases where db_->Get returns an error other than Status::NotFound. If Get fails due to a temporary error, the function may proceed incorrectly.

Apply this diff to handle potential errors:

 for (const auto& field : fields) {
   HashesDataKey data_key(key, version, field);
   std::string data_value;
   s = db_->Get(default_read_options_, handles_[kPKHashDataCF], data_key.Encode(), &data_value);
-  if (s.ok()) {
+  if (!s.ok()) {
+    if (s.IsNotFound()) {
+      rets->push_back(-2);
+      continue;
+    } else {
+      return s;
+    }
+  }
+  ParsedPKHashDataValue parsed_internal_value(&data_value);
+  if (parsed_internal_value.IsStale()) {
     rets->push_back(-2);
   } else {
     rets->push_back(1);
     parsed_internal_value.SetRelativeTimestamp(ttl);
     batch.Put(handles_[kPKHashDataCF], data_key.Encode(), data_value);
   }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  for (const auto& field : fields) {
-    HashesDataKey data_key(key, version, field);
-    std::string data_value;
-    s = db_->Get(default_read_options_, handles_[kPKHashDataCF], data_key.Encode(), &data_value);
-    if (s.ok()) {
-      ParsedPKHashDataValue parsed_internal_value(&data_value);
-      if (parsed_internal_value.IsStale()) {
-        rets->push_back(-2);
-      } else {
-        rets->push_back(1);
-        parsed_internal_value.SetRelativeTimestamp(ttl);
-        batch.Put(handles_[kPKHashDataCF], data_key.Encode(), data_value);
-      }
-    }
-  }
+  for (const auto& field : fields) {
+    HashesDataKey data_key(key, version, field);
+    std::string data_value;
+    s = db_->Get(default_read_options_, handles_[kPKHashDataCF], data_key.Encode(), &data_value);
+    if (!s.ok()) {
+      if (s.IsNotFound()) {
+        rets->push_back(-2);
+        continue;
+      } else {
+        return s;
+      }
+    }
+    ParsedPKHashDataValue parsed_internal_value(&data_value);
+    if (parsed_internal_value.IsStale()) {
+      rets->push_back(-2);
+    } else {
+      rets->push_back(1);
+      parsed_internal_value.SetRelativeTimestamp(ttl);
+      batch.Put(handles_[kPKHashDataCF], data_key.Encode(), data_value);
+    }
+  }

Comment on lines +1480 to +1482
LOG(INFO) << fmt::format("[key : {:<30}] [value : {:<30}] [timestamp : {:<10}] [version : {}] [survival_time : {}]",
parsed_strings_key.Key().ToString(), parsed_strings_value.UserValue().ToString(),
parsed_strings_value.Etime(), parsed_strings_value.Version(), survival_time);

⚠️ Potential issue

Avoid logging sensitive information

The LOG(INFO) statement logs key and value data, which may contain sensitive information. Logging such data could lead to potential PII leakage. Consider removing or masking sensitive information before logging.

Apply this diff to sanitize the log output:

 LOG(INFO) << fmt::format("[key : {:<30}] [value : {:<30}] [timestamp : {:<10}] [version : {}] [survival_time : {}]",
-                         parsed_strings_key.Key().ToString(), parsed_strings_value.UserValue().ToString(),
+                         parsed_strings_key.Key().ToString(), "<redacted>",
                          parsed_strings_value.Etime(), parsed_strings_value.Version(), survival_time);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 LOG(INFO) << fmt::format("[key : {:<30}] [value : {:<30}] [timestamp : {:<10}] [version : {}] [survival_time : {}]",
-                         parsed_strings_key.Key().ToString(), parsed_strings_value.UserValue().ToString(),
+                         parsed_strings_key.Key().ToString(), "<redacted>",
                          parsed_strings_value.Etime(), parsed_strings_value.Version(), survival_time);

1. pkhget pkhset

2. pkhexpire pkhexpireat

3. pkhexpiretime pkhpersist pkhttl

4. add  new test cases for pkhash cmd below

5. PKHSetex PKHExists PKHDel PKHLen PKHStrlen

6. PKHIncrby PKHMSet PKHMSetex PKHMGet PKHKeys

7. PKHVals PKHGetall PKHScan
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 20

🧹 Outside diff range and nitpick comments (7)
src/storage/src/base_value_format.h (1)

150-164: Add documentation for clarity.

Consider adding documentation to:

  1. Explain the purpose and contract of IsValid()
  2. Document the purpose of reserve_ array or remove if unused

Apply this diff:

-  virtual bool IsValid() { return !IsStale(); }
+  // Returns true if the value is still valid (not expired)
+  virtual bool IsValid() { return !IsStale(); }

   virtual void StripSuffix() = 0;

 protected:
   virtual void SetVersionToValue() = 0;
   virtual void SetEtimeToValue() = 0;
   virtual void SetCtimeToValue() = 0;
   std::string* value_ = nullptr;
   rocksdb::Slice user_value_;
   uint64_t version_ = 0;
   uint64_t ctime_ = 0;
   uint64_t etime_ = 0;
   DataType type_;
-  char reserve_[16] = {0};  // unused
+  // Reserved for future use to maintain ABI compatibility
+  char reserve_[16] = {0};
src/storage/src/redis.h (1)

528-529: Consider tracking the TODO in the issue tracker.

The TODO comment about separating env for each rocksdb instance suggests a potential improvement for better isolation.

Would you like me to create a GitHub issue to track this TODO item?

src/storage/include/storage/storage.h (1)

419-463: Add documentation for PKHash methods

The new PKHash methods lack documentation explaining their behavior, parameters, and return values. Consider adding detailed documentation similar to other methods in the file.

Example documentation format:

// Sets the specified fields' expiration time in the hash stored at key.
// If key does not exist, returns an error.
// Returns the number of fields that were updated in rets.
Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, 
                const std::vector<std::string>& fields,
                std::vector<int32_t>* rets);
src/storage/tests/pkhashes_test.cc (2)

582-582: Translate or remove non-English comment

The comment // 从这里开始有问题。 (roughly, "the problems start from here") is not in English. To maintain consistency and readability, please translate it to English or remove it.


628-628: Correct grammatical error in comment

The comment // hset after string type key expires, should success has a grammatical error. It should read should succeed instead of should success.

src/storage/src/storage.cc (1)

Line range hint 218-226: Initialize ttl_millsec before use to prevent undefined behavior

In the MGetWithTTL function, the variable ttl_millsec may be used uninitialized when inst->MGetWithTTL returns NotFound(). This could lead to undefined behavior when adding it to vss. Ensure that ttl_millsec is properly initialized before it's used.

Apply this diff to initialize ttl_millsec before calling MGetWithTTL:

 for (const auto& key : keys) {
   auto& inst = GetDBInstance(key);
   std::string value;
+  int64_t ttl_millsec = -1;
   s = inst->MGetWithTTL(key, &value, &ttl_millsec);
   if (s.ok()) {
     vss->push_back({value, Status::OK(), ttl_millsec});
   } else if (s.IsNotFound()) {
     vss->push_back({std::string(), Status::NotFound(), ttl_millsec});
   } else {
     vss->clear();
     return s;
   }
 }
src/pika_command.cc (1)

966-966: Correct the grammatical error in the comment

Please update the comment on line 966 for clarity:

- Clear();  // Clear cmd, Derived class can has own implement
+ Clear();  // Clear cmd; derived classes can have their own implementation
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 3f146b6 and d690461.

📒 Files selected for processing (20)
  • CMakeLists.txt (1 hunks)
  • include/acl.h (1 hunks)
  • include/pika_command.h (5 hunks)
  • include/pika_pkhash.h (1 hunks)
  • src/pika_client_conn.cc (1 hunks)
  • src/pika_command.cc (10 hunks)
  • src/pika_pkhash.cc (1 hunks)
  • src/storage/include/storage/storage.h (18 hunks)
  • src/storage/include/storage/storage_define.h (2 hunks)
  • src/storage/src/base_filter.h (6 hunks)
  • src/storage/src/base_value_format.h (5 hunks)
  • src/storage/src/pkhash_data_value_format.h (1 hunks)
  • src/storage/src/redis.cc (12 hunks)
  • src/storage/src/redis.h (14 hunks)
  • src/storage/src/redis_hashes.cc (29 hunks)
  • src/storage/src/redis_pkhashes.cc (1 hunks)
  • src/storage/src/redis_strings.cc (35 hunks)
  • src/storage/src/storage.cc (28 hunks)
  • src/storage/tests/hashes_test.cc (3 hunks)
  • src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • include/acl.h
  • src/pika_client_conn.cc
  • src/storage/include/storage/storage_define.h
  • src/storage/src/base_filter.h
  • src/storage/src/pkhash_data_value_format.h
  • src/storage/src/redis.cc
  • src/storage/src/redis_hashes.cc
  • src/storage/src/redis_strings.cc
  • src/storage/tests/hashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc

[performance] 577-577: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 786-786: Searching before insertion is not necessary.

(stlFindInsert)


[performance] 869-869: Searching before insertion is not necessary.

(stlFindInsert)

🔇 Additional comments (20)
src/storage/src/base_value_format.h (2)

52-57: LGTM! Good improvements to the class structure.

The changes enhance the class design:

  • Virtual destructor is correctly added for the base class
  • Constructor initialization list is properly implemented
  • Protected section formatting improves readability

21-31: ⚠️ Potential issue

Update DataTypeNum and document tag mappings.

Several issues need attention:

  1. DataTypeNum should be set to kAll (8) instead of kNones (7) to reflect the actual number of valid data types.
  2. The tag mappings in DataTypeTag should be documented to explain the relationship between tags and enum values.

Apply this diff:

enum class DataType : uint8_t {
  kStrings = 0,   // 'k' - key-value strings
  kHashes = 1,    // 'h' - hashes
  kSets = 2,      // 's' - sets
  kLists = 3,     // 'l' - lists
  kZSets = 4,     // 'z' - sorted sets
  kStreams = 5,   // 'x' - streams
  kPKHashes = 6,  // 'p' - persistent key hashes
  kNones = 7,     // 'n' - none
  kAll = 8,       // 'a' - all types
};
-constexpr int DataTypeNum = int(DataType::kNones);
+constexpr int DataTypeNum = int(DataType::kAll);

-constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'e', 'n', 'a'};
+constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'p', 'n', 'a'};

Let's verify the tag usage across the codebase:

Also applies to: 34-35

✅ Verification successful

Let me analyze the findings from the shell script results:

  1. DataTypeTag is used consistently across the codebase in multiple files:

    • src/storage/src/redis.h
    • src/storage/src/redis.cc
    • src/storage/src/storage.cc
    • src/storage/src/base_value_format.h
  2. The tag 'e' for kPKHashes is used in key construction, cursor management, and type identification throughout the codebase.

Let me verify if changing it to 'p' would cause conflicts:


Let me analyze the findings from the shell script results:

  1. The search for 'p' usage shows it's used in various contexts:

    • As a parameter name in function arguments
    • As a variable name in loops
    • As a time format specifier for AM/PM in Highcharts
    • As a HTML tag in jQuery selectors
    • As a URL parameter in Angular
  2. Most importantly, there are no occurrences where 'p' is used as a data type tag in the storage system, which means changing 'e' to 'p' for kPKHashes would not conflict with any existing usage.

Let me verify the data type tag usage more specifically:


Based on the shell script results, I can now provide a complete analysis:

The review comment is correct and should be implemented.

  1. The DataTypeTag array is used consistently throughout the storage system to identify data types in:

    • Key construction for cursors (storage.cc)
    • Iterator creation (redis.h)
    • Key lookup operations (redis.cc)
    • Statistics tracking (redis.cc)
  2. The tag 'e' for kPKHashes should be changed to 'p' because:

    • It's more semantically meaningful ('p' for "persistent key hashes")
    • The tag 'p' is not used for any other data type in the storage system
    • The change is consistent with the string representation "pkhash" in DataTypeStrings
    • No conflicts were found in the codebase where 'p' is used as a data type tag
  3. The DataTypeNum should be updated to kAll since it represents the maximum valid data type value.

The suggested changes in the review comment are valid and improve code clarity and consistency.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for DataTypeTag usage to ensure consistency
rg "DataTypeTag" -A 5

Length of output: 5639


Script:

#!/bin/bash
# Search for any usage of 'p' as a type identifier
rg "'p'|\"p\"" -A 2

Length of output: 179611


Script:

#!/bin/bash
# Search for data type tag usage in storage files
rg "DataTypeTag" src/storage/src/ -A 2 -B 2

Length of output: 4961

include/pika_pkhash.h (1)

439-467: Consider adding bounds checking for cursor and count.

The PKHScanCmd should validate:

  1. Cursor value is non-negative
  2. Count value is positive

Let's verify if there are any existing bounds checks in the implementation:
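
A minimal sketch of the kind of validation intended here, assuming the checks live in the command's argument parsing; the helper name and error strings are hypothetical placeholders, not Pika's actual responses:

#include <cstdint>
#include <cstdlib>
#include <string>

// Hypothetical helper illustrating the suggested bounds checks for PKHSCAN:
// the cursor must be a non-negative integer and COUNT must be positive.
static bool ParsePKHScanArgs(const std::string& cursor_str, const std::string& count_str,
                             int64_t* cursor, int64_t* count, std::string* err) {
  char* end = nullptr;
  *cursor = std::strtoll(cursor_str.c_str(), &end, 10);
  if (end == cursor_str.c_str() || *end != '\0' || *cursor < 0) {
    *err = "ERR invalid cursor";
    return false;
  }
  *count = std::strtoll(count_str.c_str(), &end, 10);
  if (end == count_str.c_str() || *end != '\0' || *count <= 0) {
    *err = "ERR syntax error";  // COUNT must be a positive integer
    return false;
  }
  return true;
}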

include/pika_command.h (2)

140-160: LGTM: PKHash command constants are well-defined.

The new PKHash command constants follow consistent naming patterns and provide a comprehensive set of operations including the required expiration functionality.


314-314: Verify PKHash flag usage across the codebase.

The kCmdFlagsPKHash flag has been added, but ensure it's properly applied to all PKHash commands in their implementations.

✅ Verification successful

PKHash flag is correctly applied to all PKHash commands

The verification shows that kCmdFlagsPKHash is properly used in all PKHash command constructors in src/pika_command.cc. The flag is consistently applied along with other appropriate flags (read/write, throughDB, fast/slow) for each PKHash command implementation including:

  • Write commands: PKHSet, PKHExpire, PKHExpireat, PKHSetex, PKHDel, PKHIncrby, PKHMSet
  • Read commands: PKHGet, PKHExists, PKHLen, PKHStrlen, PKHMGet, PKHKeys
  • Slow operations: PKHVals, PKHGetall, PKHScan
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify PKHash flag usage in command implementations

# Search for PKHash command class definitions
echo "Checking PKHash command implementations for flag usage:"
ast-grep --pattern 'class PKH$$$Cmd : public Cmd {
  $$$
  PKH$$$Cmd() : Cmd($$$) {
    $$$
  }
  $$$
}'

# Search for PKHash flag usage in command constructors
echo "Checking for kCmdFlagsPKHash usage in constructors:"
rg "kCmdFlagsPKHash" --type cpp

Length of output: 2727

src/storage/src/redis.h (3)

151-152: LGTM! Method signatures reformatted for better readability.

The changes improve code consistency and readability without affecting functionality.

Also applies to: 297-298, 300-301, 310-311


464-465: LGTM! PKHashes correctly added to stale check logic.

The PKHashes case is appropriately handled in the ExpectedStale method, following the same pattern as regular Hashes.


251-291: Well-structured implementation of PK Hash commands.

The new PK Hash commands are well-organized and follow consistent patterns with other Redis data type implementations. The comprehensive set of commands (PKHExpire, PKHGet, PKHSet, etc.) aligns perfectly with the PR objectives.

Let's verify the consistency of PK Hash command implementations:

✅ Verification successful

PK Hash commands implementation is consistent and well-tested

The verification confirms that:

  • TTL handling is consistent across PKHSetex and PKHMSetex implementations with proper validation
  • Error handling follows a uniform pattern across all PK Hash commands
  • Comprehensive test coverage exists for TTL-related operations (PKHExpire, PKHExpireat, PKHSetex)
  • The implementation maintains atomicity through proper locking mechanisms
  • Command implementations follow consistent patterns for argument validation and error responses

The code structure and implementation patterns align perfectly with the PR objectives and maintain consistency with other Redis data type implementations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify PK Hash command implementations follow consistent patterns

# Check for consistent error handling patterns
rg "Status PKH\w+\s*\([^)]+\)\s*{" --type cpp -A 5

# Check for consistent TTL handling
rg "PKH\w+ex\s*\([^)]+ttl" --type cpp -A 5

Length of output: 2512


Script:

#!/bin/bash
# Let's fix the regex and check for consistent error handling patterns
rg "Status PKH.*?\{" --type cpp -A 5

# Let's also check the implementation of PKHMSetex to verify TTL handling consistency
rg "PKHMSetex.*?\{" --type cpp -A 10

Length of output: 2582


Script:

#!/bin/bash
# Let's check the implementation of PKHSetex to compare with PKHMSetex
rg "PKHSetex.*?\{" --type cpp -A 10

# Also check PKHExpire implementation for TTL handling
rg "PKHExpire.*?\{" --type cpp -A 10

Length of output: 16928

src/storage/include/storage/storage.h (2)

112-114: LGTM: ValueStatus struct changes

The addition of ttl_millsec and its inclusion in operator== is well implemented and consistent with the TTL functionality.
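
For reference, a simplified sketch of the struct as described; the member order and the string-based status comparison are assumptions for illustration:

#include <cstdint>
#include <string>

#include "rocksdb/status.h"

// Sketch only: shows ttl_millsec taking part in equality alongside the value
// and status; statuses are compared via ToString() for simplicity.
struct ValueStatus {
  std::string value;
  rocksdb::Status status;
  int64_t ttl_millsec = -1;

  bool operator==(const ValueStatus& vs) const {
    return vs.value == value && vs.status.ToString() == status.ToString() &&
           vs.ttl_millsec == ttl_millsec;
  }
};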


421-424: ⚠️ Potential issue

Use int64_t consistently for TTL parameters

For consistency and to support larger TTL values, use int64_t instead of int32_t for TTL parameters.

-Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
+Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
                   std::vector<int32_t>* rets);

Likely invalid or redundant comment.
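
For context, a call site against the proposed int64_t signature would look roughly like this; this is a sketch only, and the TTL units plus the per-field meaning of rets are assumptions rather than details taken from the PR:

#include <cstdint>
#include <string>
#include <vector>

#include "storage/storage.h"

// Assumes `db` is an opened storage::Storage and that each entry of `rets`
// reports the outcome for the corresponding field.
storage::Status ExpireTwoFields(storage::Storage& db) {
  std::vector<std::string> fields = {"field1", "field2"};
  std::vector<int32_t> rets;
  return db.PKHExpire("pkh_key", /*ttl=*/100, /*numfields=*/2, fields, &rets);
}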

src/storage/src/redis_pkhashes.cc (4)

23-64: Function PKHGet implemented correctly

The PKHGet function properly retrieves the value of a specified field in a hash, with appropriate handling of stale entries and type checks.


66-152: Function PKHSet implemented correctly

The PKHSet function correctly sets the value of a hash field, managing metadata updates and handling both existing and new fields appropriately.


155-217: Function PKHExpire implemented correctly

The PKHExpire function accurately sets expiration times for specified hash fields, with proper input validation and error handling.


473-563: Function PKHSetex implemented correctly

The PKHSetex function effectively sets the value and expiration time for a hash field, ensuring TTL is applied correctly.

src/storage/tests/pkhashes_test.cc (4)

27-33: ⚠️ Potential issue

Check the return status of db.Open in SetUp()

The return status of db.Open(storage_options, path); is assigned to s, but there is no check to verify if the operation was successful. Ignoring the status may lead to unexpected behavior if the database fails to open.

Apply this diff to add an assertion to ensure the database opens successfully:

 s = db.Open(storage_options, path);
+ASSERT_TRUE(s.ok());

Likely invalid or redundant comment.


872-877: ⚠️ Potential issue

Use IsInvalidArgument instead of IsCorruption in PKHIncrby

When incrementing a field with invalid numeric data, you check s.IsCorruption(). It's more appropriate to check for s.IsInvalidArgument() when handling invalid input.

Apply this change:

-ASSERT_TRUE(s.IsCorruption());
+ASSERT_TRUE(s.IsInvalidArgument());

Likely invalid or redundant comment.


95-103: ⚠️ Potential issue

Remove unused variable and correct error checking in make_expired

In the make_expired function, the variable type_status is declared but never populated. The condition !type_status[storage::DataType::kHashes].ok() therefore always evaluates to false (operator[] default-constructs an OK status for the missing entry), so it provides no real error handling.

Apply this diff to remove the unused variable and correct the error checking:

-static std::map<storage::DataType, rocksdb::Status> type_status;
 int ret = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (ret == 0) {
     return false;
 }

Alternatively, if you need to check the status returned by Expire, modify the code as follows:

-static std::map<storage::DataType, rocksdb::Status> type_status;
-int ret = db->Expire(key, 1);
+storage::Status s = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (!s.ok()) {
     return false;
 }

Likely invalid or redundant comment.


1468-1475: Ensure proper handling of expired keys in PKHScanTest

In the PKHScan test for Group 10, after expiring the key using make_expired, you are checking for s.IsNotFound(). Confirm that the make_expired function effectively expires the key and that the test reliably validates the expected behavior.

Run the following script to verify the expiration:

✅ Verification successful

Let me gather more information about the make_expired function and its implementation to verify the expiration handling.


Let me search for the implementation of make_expired function in the pkhashes test file to verify its behavior.


Let me try a different approach to find the implementation.


Based on the implementation found, I can now provide the final response:

The key expiration handling in PKHScanTest is properly implemented

The make_expired function correctly handles key expiration by:

  1. Setting a 1-second expiration using db->Expire(key, 1)
  2. Waiting for 2 seconds (sleep_for(2000ms)) to ensure the key has expired
  3. Verifying the expired state through IsNotFound() status check

The test properly validates that scanning an expired key returns a NotFound status and empty results.
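
For readers of this thread, a sketch consistent with the behaviour summarised above; it is simplified, and the actual helper in pkhashes_test.cc may differ in signature:

#include <chrono>
#include <string>
#include <thread>

#include "storage/storage.h"

// Apply a 1-second key-level TTL, wait 2 seconds so the key is guaranteed to
// have expired, and report whether the expiration was applied at all.
static bool MakeExpired(storage::Storage* db, const std::string& key) {
  int ret = db->Expire(key, 1);  // 1-second TTL, as in the test's usage above
  if (ret == 0) {
    return false;  // nothing expired (e.g. the key does not exist)
  }
  std::this_thread::sleep_for(std::chrono::milliseconds(2000));
  return true;
}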

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Confirm that key "GP10_HSCAN_KEY" has expired.

# Test: Attempt to get the key. Expect: No results found.
ast-grep --pattern $'#define GP10_HSCAN_KEY' \
&& echo "Key exists" || echo "Key has expired or does not exist"

Length of output: 76


Script:

#!/bin/bash
# Search for make_expired function implementation
ast-grep --pattern 'make_expired($$$)' -l cpp

# Also search using ripgrep for any related implementation
rg "make_expired" -A 5

Length of output: 59850


Script:

#!/bin/bash
# Search for make_expired implementation in pkhashes_test.cc
rg -A 10 "make_expired.*pkhashes_test" 

# Also search for Expire implementation
ast-grep --pattern 'int.*Expire($$$)' -l cpp

Length of output: 87


Script:

#!/bin/bash
# Search for make_expired function in pkhashes_test.cc directly
rg -A 6 "^static.*make_expired" src/storage/tests/pkhashes_test.cc

# Also check the Expire function usage in the test
rg "db.*Expire" src/storage/tests/pkhashes_test.cc

Length of output: 1179

src/storage/src/storage.cc (1)

488-595: Addition of PKH* methods enhances Pika Hash functionality

The implementation of the new PKH* methods is well-structured and adheres to the existing code conventions. These additions significantly enhance the Pika Hash capabilities, providing more flexible data management and expiration control.
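
These methods follow the same per-key dispatch pattern visible in the MGetWithTTL excerpt earlier in this review; a simplified illustration, with an assumed PKHGet signature:

// Illustration only -- not the PR's exact code; the PKHGet signature is an
// assumption, while GetDBInstance mirrors the MGetWithTTL excerpt above.
Status Storage::PKHGet(const Slice& key, const Slice& field, std::string* value) {
  auto& inst = GetDBInstance(key);  // route to the instance that owns this key
  return inst->PKHGet(key, field, value);
}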

src/pika_command.cc (1)

485-564: PKHash Commands Added Correctly

The new PKHash commands have been added to the command table with appropriate initializations and flags. This enhances the command handling capabilities for PKHash operations.

Comment on lines +6 to +13
#ifndef PIKA_PKHASH_H_
#define PIKA_PKHASH_H_

#include "include/acl.h"
#include "include/pika_command.h"
#include "include/pika_db.h"
#include "storage/storage.h"


🛠️ Refactor suggestion

Consider adding documentation and extracting common code.

The file would benefit from:

  1. Class-level documentation explaining the purpose and usage of each command
  2. Consider extracting common code (e.g., current_key(), status handling) into a base class specific to PKHash commands

Add documentation at the file level:

/**
 * @file pika_pkhash.h
 * @brief Command implementations for PKHash operations in Pika database.
 *
 * This file contains command classes that implement various hash operations including:
 * - Hash expiration (PKHExpire, PKHExpireat, PKHExpiretime)
 * - Hash persistence (PKHPersist)
 * - Hash manipulation (PKHGet, PKHSet, etc.)
 * - Hash scanning and iteration (PKHScan)
 */
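
As a concrete shape for the second suggestion, a minimal base class might look like this; the name is illustrative, and the members mirror those repeated in the command classes quoted below:

// Illustrative sketch of the suggested common base; not part of the PR.
class PKHashCmdBase : public Cmd {
 public:
  PKHashCmdBase(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override { return {key_}; }

 protected:
  std::string key_;     // single key shared by every PKHash command
  rocksdb::Status s_;   // storage status captured by Do()
};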

Comment on lines +459 to +463
virtual void Clear() {
pattern_ = "*";
count_ = 10;
is_wt_ = false;
}

⚠️ Potential issue

Initialize all member variables in Clear() method.

The cursor_ member variable is not initialized in the Clear() method, which could lead to undefined behavior.

Apply this fix:

   virtual void Clear() {
+    key_.clear();
     pattern_ = "*";
+    cursor_ = 0;
     count_ = 10;
     is_wt_ = false;
   }
📝 Committable suggestion


Suggested change
virtual void Clear() {
pattern_ = "*";
count_ = 10;
is_wt_ = false;
}
virtual void Clear() {
key_.clear();
pattern_ = "*";
cursor_ = 0;
count_ = 10;
is_wt_ = false;
}

Comment on lines +14 to +141
PKHExpireatCmd(const std::string& name, int arity, uint32_t flag)
: Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
std::vector<std::string> current_key() const override {
std::vector<std::string> res;
res.push_back(key_);
return res;
}
void Do() override;
void Split(const HintKeys& hint_keys) override {};
void Merge() override {};
Cmd* Clone() override { return new PKHExpireatCmd(*this); }

private:
std::string key_;
int64_t timestamp_ = 0;
int64_t numfields_ = 0;
std::vector<std::string> fields_;

rocksdb::Status s_;

void DoInitial() override;
void Clear() override {}
};
class PKHExpiretimeCmd : public Cmd {
public:
PKHExpiretimeCmd(const std::string& name, int arity, uint32_t flag)
: Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
std::vector<std::string> current_key() const override {
std::vector<std::string> res;
res.push_back(key_);
return res;
}
void Do() override;
void Split(const HintKeys& hint_keys) override {};
void Merge() override {};
Cmd* Clone() override { return new PKHExpiretimeCmd(*this); }

private:
std::string key_;
int64_t ttl_ = 0;
int64_t numfields_ = 0;
std::vector<std::string> fields_;

rocksdb::Status s_;

void DoInitial() override;
void Clear() override {}
};

class PKHPersistCmd : public Cmd {
public:
PKHPersistCmd(const std::string& name, int arity, uint32_t flag)
: Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
std::vector<std::string> current_key() const override {
std::vector<std::string> res;
res.push_back(key_);
return res;
}
void Do() override;
void Split(const HintKeys& hint_keys) override {};
void Merge() override {};
Cmd* Clone() override { return new PKHPersistCmd(*this); }

private:
std::string key_;
int64_t ttl_ = 0;
int64_t numfields_ = 0;
std::vector<std::string> fields_;

rocksdb::Status s_;

void DoInitial() override;
void Clear() override {}
};

class PKHTTLCmd : public Cmd {
public:
PKHTTLCmd(const std::string& name, int arity, uint32_t flag)
: Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
std::vector<std::string> current_key() const override {
std::vector<std::string> res;
res.push_back(key_);
return res;
}
void Do() override;
void Split(const HintKeys& hint_keys) override {};
void Merge() override {};
Cmd* Clone() override { return new PKHTTLCmd(*this); }

private:
std::string key_;
int64_t ttl_ = 0;
int64_t numfields_ = 0;
std::vector<std::string> fields_;

rocksdb::Status s_;

void DoInitial() override;
void Clear() override {}
};

⚠️ Potential issue

Initialize member variables and add TTL validation.

The expiration-related commands (PKHExpire, PKHExpireat, PKHExpiretime, PKHPersist, PKHTTL) should:

  1. Initialize fields_ vector in constructor or Clear()
  2. Add validation for TTL/timestamp values

Example fix for PKHExpireCmd (apply similar pattern to other commands):

 class PKHExpireCmd : public Cmd {
 public:
   PKHExpireCmd(const std::string& name, int arity, uint32_t flag)
-      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
+      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)),
+        ttl_(0), numfields_(0) {}
   // ...
   void Clear() override {
+    key_.clear();
+    fields_.clear();
+    ttl_ = 0;
+    numfields_ = 0;
   }
 };

Committable suggestion was skipped due to low confidence.
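
For the validation half of this comment, a small standalone sketch of the intended checks; the function names are placeholders and the behaviour notes are general Redis-style semantics rather than details from the PR:

#include <cstdint>
#include <ctime>

// A relative TTL must be a positive number of seconds; an absolute timestamp
// in the past is typically not an error but expires the field immediately, so
// callers may want to detect that case explicitly.
static bool RelativeTtlIsValid(int64_t ttl_sec) { return ttl_sec > 0; }

static bool TimestampIsInFuture(int64_t timestamp_sec) {
  return timestamp_sec > static_cast<int64_t>(std::time(nullptr));
}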

Comment on lines +38 to +39
set(CMAKE_BUILD_TYPE "Debug")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")

⚠️ Potential issue

Remove hardcoded Debug build type and compiler flags.

The changes force Debug build type and add debug-specific compiler flags that disable optimizations. This could significantly impact performance in production environments and conflicts with the existing build type logic below (lines 46-55).

Consider one of these alternatives:

  1. Remove these lines and let the existing build type logic handle it:
-set(CMAKE_BUILD_TYPE "Debug")
-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")
  2. Move these settings to a development-only configuration:
option(PIKA_DEVELOPMENT_MODE "Enable development mode with debug settings" OFF)
if(PIKA_DEVELOPMENT_MODE)
    set(CMAKE_BUILD_TYPE "Debug")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")
endif()

Comment on lines +273 to +274
Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_millsec,
std::string& out_new_value);

⚠️ Potential issue

Use pointers consistently for output parameters

For consistency with the codebase style, output parameters should use pointers rather than references.

-Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_millsec,
-               std::string& out_new_value);
+Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_millsec,
+               std::string* out_new_value);

-Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys,
-              std::string& value_to_dest, int64_t* ret);
+Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys,
+              std::string* value_to_dest, int64_t* ret);

Also applies to: 283-284

Comment on lines +451 to +454
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HMSET_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());


⚠️ Potential issue

Correct error checking and remove unused type_status variable in PKHMSetTest

In the PKHMSetTest, the type_status map is declared but not used properly. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid because type_status is empty.

Apply this diff to correct the error checking:

-std::map<storage::DataType, rocksdb::Status> type_status;
 db.Expire("HMSET_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());

Capture and check the status returned by Expire:

+storage::Status s = db.Expire("HMSET_KEY", 1);
 ASSERT_TRUE(s.ok());
📝 Committable suggestion


Suggested change
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HMSET_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
storage::Status s = db.Expire("HMSET_KEY", 1);
ASSERT_TRUE(s.ok());

Comment on lines +986 to +989
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

⚠️ Potential issue

Correct error checking and remove unused type_status variable in PKHKeys

In the PKHKeys test, the type_status map is declared but not initialized or used properly. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid because type_status is empty.

Apply this diff to correct the error checking:

-std::map<storage::DataType, rocksdb::Status> type_status;
 db.Expire("B_HKEYS_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());

Capture and check the status:

+storage::Status s = db.Expire("B_HKEYS_KEY", 1);
 ASSERT_TRUE(s.ok());
📝 Committable suggestion


Suggested change
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
storage::Status s = db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(s.ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

Comment on lines +1051 to +1054
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HVALS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

⚠️ Potential issue

Correct error checking and remove unused type_status variable in PKHVals

In the PKHVals test, the type_status map is declared but not used properly. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid due to the empty type_status.

Apply this diff to correct the error checking:

-std::map<storage::DataType, rocksdb::Status> type_status;
 db.Expire("B_HVALS_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());

Capture and check the status:

+storage::Status s = db.Expire("B_HVALS_KEY", 1);
 ASSERT_TRUE(s.ok());

Committable suggestion was skipped due to low confidence.

Comment on lines +528 to +530
db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

⚠️ Potential issue

Correct error checking and remove unused type_status variable in PKHDel

In the PKHDel test, the type_status map is declared but not initialized. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); does not function correctly since type_status is empty.

Apply this diff to correct the error checking:

-std::map<storage::DataType, rocksdb::Status> type_status;
 db.Expire("HDEL_TIMEOUT_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());

Capture and check the status:

+storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
 ASSERT_TRUE(s.ok());

Committable suggestion was skipped due to low confidence.

Comment on lines +831 to +834
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HGETALL_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));

⚠️ Potential issue

Correct error checking and remove unused type_status variable in PKHGetall

In the PKHGetall test, the type_status map is declared but not properly used. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid due to the empty type_status.

Apply this diff to correct the error checking:

-std::map<storage::DataType, rocksdb::Status> type_status;
 db.Expire("B_HGETALL_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());

Capture and check the status:

+storage::Status s = db.Expire("B_HGETALL_KEY", 1);
 ASSERT_TRUE(s.ok());

Committable suggestion was skipped due to low confidence.
