UI Paging for Chirps History Pre-Consensus Protocol Integration #1796

Draft
wants to merge 5 commits into master
Conversation

Kaz-ookid
Contributor

@Kaz-ookid Kaz-ookid commented Apr 3, 2024

EDIT:

This PR went stale, since the back-end network part has been delayed. The UI is functional and ready. This PR only needs to be taken over and adapted for the new catchup messages introduced in #1822 and #1878. See also section 3.1.3 of the 2024 Global Report. The mocked paging needs to be replaced within this PR, and the PR merged once everything is clean and finished. There is no gain in merging this mocked version alone.

Tests have also not been written, since they would need to be redone after the mocking is upgraded to the actual behavior.

Summary

This PR introduces a UI update for paging through the chirps history. It lays the groundwork for an upcoming backend consensus protocol which, once implemented, will enable fetching the n most recent chirps (more precisely, chirps from index x to index y) for efficient data handling and display.

Changes

  • Added a "Load More" button at the end of the chirp history list.
  • Implemented fake paging on the frontend to simulate loading chunks of chirp data (see the sketch after this list).
  • Ensured that the current changes do not disrupt the application's functionality and can be easily reversed or adapted for true paging post-consensus implementation.
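
To make the idea concrete, here is a minimal sketch of what the mocked frontend paging amounts to. The class and member names (ChirpPager, visibleChirps, loadMore) are illustrative and not the actual names used in the diff, and a Chirp model class is assumed:

```kotlin
/**
 * Minimal sketch of mocked paging: the full chirp list is already in memory,
 * and "Load More" simply exposes one more page of it to the UI.
 */
class ChirpPager(private val allChirps: List<Chirp>, private val pageSize: Int = 10) {

  // Number of chirps currently exposed to the UI, newest first.
  private var visibleCount = minOf(pageSize, allChirps.size)

  /** Chirps the UI should currently display. */
  fun visibleChirps(): List<Chirp> = allChirps.take(visibleCount)

  /** True while there are older chirps not yet displayed ("Load More" stays visible). */
  fun canLoadMore(): Boolean = visibleCount < allChirps.size

  /** Called when the "Load More" button is pressed. */
  fun loadMore() {
    visibleCount = minOf(visibleCount + pageSize, allChirps.size)
  }
}
```

True paging would only change loadMore() so that it triggers a backend request instead of revealing data that is already local.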

[Screenshots: chirp history list showing the new "Load More" button]

Rationale

Currently, connecting to a LAO results in fetching the entire chirp history in a single network call, leading to network strain and local storage bloat. With this provisional UI paging, we're setting the stage for handling chirp data in a scalable way, avoiding unnecessary data load when users may not need access to older messages.

Notes

  • This is a provisional feature mimicking the eventual paging functionality, aimed at improving current performance and preparing the UI for backend upgrades.
  • Full paging will be activated once the global ordering provided by the consensus protocol, and with it a new query system, becomes available.

Impact

  • Network Efficiency in the future: Reduces the amount of data transferred during initial LAO connection.
  • Storage Management in the future: Mitigates local storage saturation by displaying a limited set of chirps.
  • User Experience: Improves app responsiveness and user experience by avoiding overloading the UI with the full chirps history.

@Kaz-ookid Kaz-ookid requested a review from a team as a code owner April 3, 2024 01:14
@Kaz-ookid Kaz-ookid self-assigned this Apr 3, 2024

github-actions bot commented Apr 3, 2024

Pull reviewers stats

Stats of the last 30 days for popstellar:

User            Total reviews   Time to review   Total comments
MariemBaccari   3               12d 5m           2
Tyratox         2               4d 23h 53m       4
4xiom5          1               1d 22h 57m       0
1florentin      1               6d 20h 7m        0
sgueissa        1               15d 23h 37m      0
ljankoschek     1               7h 42m           0
matteosz        1               4d 6h 41m        1
Kaz-ookid       1               2d 5h 33m        1
⚡️ Pull request stats

@Kaz-ookid Kaz-ookid added enhancement New feature or request fe2-android labels Apr 3, 2024
@Kaz-ookid Kaz-ookid marked this pull request as draft April 3, 2024 16:14

sonarqubecloud bot commented Apr 3, 2024

Quality Gate passed for 'PoP - PoPCHA-Web-Client'

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarCloud


sonarqubecloud bot commented Apr 3, 2024

Quality Gate passed for 'PoP - Be2-Scala'

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarCloud


sonarqubecloud bot commented Apr 3, 2024

Quality Gate passed for 'PoP - Be1-Go'

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarCloud


sonarqubecloud bot commented Apr 3, 2024

Quality Gate failed for 'PoP - Fe2-Android'

Failed conditions
33.8% Coverage on New Code (required ≥ 65%)

See analysis details on SonarCloud


sonarqubecloud bot commented Apr 3, 2024

Quality Gate passed for 'PoP - Fe1-Web'

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarCloud

Contributor

@matteosz matteosz left a comment


Really nice work here!

Just an idea, not really a suggestion, feel free to proceed in whatever way is easier/more effective for you. As you mentioned in the PR description, many things are provisional until the backends are updated with the new way of retrieving chirps. Another option I see would be to already implement things the way they will work in the future, and temporarily set the initial number of chirps to a very large value, so it doesn't affect the app for now but is easily extensible.

So, for instance, you could already start to manage the DB part. That means adding a query function to the DB that initially retrieves only N chirps based on timestamp, so we don't load all the chirps into memory, since they wouldn't be displayed anyway.

Then I'm also curious how you're going to handle cases like: I ask the server for just the latest N chirps, but I already have other chirps stored on disk that may intersect with the received set. You could then avoid asking for all N and request only the ones you are missing (reduced message overhead, but more complex). But I guess it's still a WIP (?)

Comment on lines +185 to +186
private var storageIsLoaded: Boolean = false
private var reachedEndOfChirps: Boolean = false
Contributor


storageIsLoaded seems like a shared variable to me, at least theoretically, since it is accessed both in the add method and when data is loaded from disk, which can in theory happen in parallel (in practice it rarely does, as loading from storage is quite fast, but I'd still use atomic variables). reachedEndOfChirps, on the other hand, shouldn't be shared as long as we always check whether we can load more before actually loading the chirps.
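
A minimal sketch of the atomic variant this suggests, assuming the two fields live in the repository class quoted above (the class name and the onStorageLoaded method are illustrative, not the actual code):

```kotlin
import java.util.concurrent.atomic.AtomicBoolean

class LaoChirpsSketch {
  // Shared between the add() path and the disk-loading path, so make it atomic.
  private val storageIsLoaded = AtomicBoolean(false)

  // Only touched from the "load more" flow, so a plain var is enough,
  // as long as we always check it before triggering another load.
  private var reachedEndOfChirps = false

  fun onStorageLoaded() {
    // compareAndSet ensures the post-load initialization runs at most once,
    // even if two threads finish loading from disk concurrently.
    if (storageIsLoaded.compareAndSet(false, true)) {
      // ... initialization that must happen exactly once ...
    }
  }
}
```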

@Kaz-ookid
Contributor Author

Kaz-ookid commented Apr 4, 2024

> So, for instance, you could already start to manage the DB part. That means adding a query function to the DB that initially retrieves only N chirps based on timestamp, so we don't load all the chirps into memory, since they wouldn't be displayed anyway.

Yes, maybe I can just change the query performed in getChirpsByLaoId (ChirpDao_Impl) so that it takes the list of IDs we want, and drop all the sketchy setup in LaoChirps that mimics the behavior. The only issue is that, since we don't have total ordering yet, message IDs don't reflect their place in the message history. The only way I see to mimic querying messages by ID (i.e. by their place in the history) is to refer to their index when ordered by timestamp. Then, if we already have the 10 latest chirps, we would call getChirpsByLaoId(laoID, {11, 12, ..., 20}) with the query ordered by timestamp and get back the 11th to 20th latest chirps. But seeing how loadStorage() looks, I may have a hard time doing all this...
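
For reference, a sketch of what such a timestamp-ordered paged query could look like as a Room DAO method. The interface, method, table and column names, and the ChirpEntity type are assumptions for illustration, not the project's actual ChirpDao:

```kotlin
import androidx.room.Dao
import androidx.room.Query

@Dao
interface PagedChirpDao {
  /**
   * Returns chirps of the given LAO ordered from newest to oldest,
   * skipping the first `offset` of them and returning at most `limit`.
   * This mimics "chirps at history indexes offset..offset+limit-1"
   * until real, totally ordered message IDs are available.
   */
  @Query(
    "SELECT * FROM chirps WHERE lao_id = :laoId " +
      "ORDER BY timestamp DESC LIMIT :limit OFFSET :offset"
  )
  fun getChirpPage(laoId: String, limit: Int, offset: Int): List<ChirpEntity>
}
```

Calling it repeatedly with an increasing offset would replace the single bulk load currently done in loadStorage().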

> Then I'm also curious how you're going to handle cases like: I ask the server for just the latest N chirps, but I already have other chirps stored on disk that may intersect with the received set. You could then avoid asking for all N and request only the ones you are missing (reduced message overhead, but more complex). But I guess it's still a WIP (?)

To me, there is no way to have "holes" in our chirps list. That said, it is still a nice thing to have and would work very well once consensus is available. Once we have total ordering, message IDs will actually make sense and reflect their position in the history. This means that if we have messages 1 to 20, and message 25, we will be able to detect that we are missing messages 21, 22, 23 and 24. That way there will never be a "hole" in the local history. But we would still be able to implement the query as I suggested above, i.e. ask for a list of indexes rather than a range. So we could build this list of indexes and remove the indexes we already have.

To sum it up, a lot of implementation details and optimizations will have to wait until we have consensus, as it is surely not worth painfully trying to mimic it yet. But there is definitely a way to mimic it better than what is done for now, by changing the SQL query rather than having sketchy behavior in LaoChirps and a bunch of TODO : Remove when.... Then we would not call this query just once, as is done in loadStorage, but multiple times whenever we need older chirps.
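
Once total ordering gives messages real history indexes, the hole-detection / "ask only for what's missing" idea above boils down to a small set computation. A sketch, with illustrative names:

```kotlin
/**
 * Given the history indexes we already hold locally and the range we want
 * to display, return only the indexes that still have to be requested.
 */
fun missingIndexes(localIndexes: Set<Int>, from: Int, to: Int): List<Int> =
    (from..to).filterNot { it in localIndexes }

// Example: we hold chirps 1..20 and 25, and want everything up to 25:
// missingIndexes((1..20).toSet() + 25, 21, 25) == listOf(21, 22, 23, 24)
```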

@matteosz
Contributor

matteosz commented Apr 4, 2024

Very clear now, thanks for explaining more! And yeah, I totally agree with your point, it would avoid refactoring the code too much later.
