feat: Added in-memory storage for testing purposes #59
Harshdev098 wants to merge 2 commits into lightningdevkit:main
Conversation
👋 I see @tnull was unassigned.
tnull
left a comment
Thanks for looking into this!
Generally this goes in the right direction, but we definitely need to avoid re-allocating everything on every operation.
Force-pushed 4980a75 to 25d57e3
@tnull I have made the required changes.
tnull
left a comment
Looks much better, but I think we still need to handle `global_version` properly, even if we're currently not using it client-side.
🔔 1st Reminder Hey @tankyleo! This PR has been waiting for your review.
Force-pushed 25d57e3 to 9012e95
Force-pushed 9012e95 to 3b434d0
@tankyleo Can you please review it?
rust/impls/src/in_memory_store.rs (outdated)
```rust
        version: record.version,
    }),
})
} else if request.key == GLOBAL_VERSION_KEY {
```
Looks like by the time we are here, we know the `GLOBAL_VERSION_KEY` does not have a value, otherwise `guard.get` would have returned `Some` previously. We can just return the `GetObjectResponse` below directly with `version: 0`.
@Harshdev098 Double-checking things here: is this second branch still the same as before?
Shouldn't we still return early if `request.key == GLOBAL_VERSION_KEY` rather than always performing the lookup first?
The first lookup handles the case where `GLOBAL_VERSION_KEY` holds some non-zero value. We want to check whether it has been set to a value in the map before returning the initial value in this branch.
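Taken together, the behaviour the thread converges on can be sketched roughly as follows. The `HashMap`-backed store and the response struct below are illustrative stand-ins, not the actual vss-server types: look the key up first, and only fall back to a `version: 0` response when the key is `GLOBAL_VERSION_KEY` and absent.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the server types; names are illustrative only.
const GLOBAL_VERSION_KEY: &str = "vss_global_version";

#[derive(Debug)]
struct GetObjectResponse {
    key: String,
    value: Vec<u8>,
    version: i64,
}

fn get_object(
    store: &HashMap<String, (Vec<u8>, i64)>,
    key: &str,
) -> Option<GetObjectResponse> {
    // First lookup: covers the case where GLOBAL_VERSION_KEY holds a non-zero value.
    if let Some((value, version)) = store.get(key) {
        return Some(GetObjectResponse {
            key: key.to_string(),
            value: value.clone(),
            version: *version,
        });
    }
    // If we get here, GLOBAL_VERSION_KEY has never been written: return the
    // initial version 0 directly rather than treating it as a missing key.
    if key == GLOBAL_VERSION_KEY {
        return Some(GetObjectResponse { key: key.to_string(), value: Vec::new(), version: 0 });
    }
    None
}

fn main() {
    let mut store = HashMap::new();
    assert_eq!(get_object(&store, GLOBAL_VERSION_KEY).unwrap().version, 0);
    store.insert(GLOBAL_VERSION_KEY.to_string(), (vec![1u8], 7));
    assert_eq!(get_object(&store, GLOBAL_VERSION_KEY).unwrap().version, 7);
    assert!(get_object(&store, "some_other_key").is_none());
    println!("ok");
}
```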
Force-pushed 3b434d0 to e0c31bb
@tankyleo I have made the required changes! Can you please review it?
Force-pushed 8003119 to 5898609
tnull
left a comment
When testing integration with LDK Node locally I found that the tests are currently failing. I now opened #62 to add LDK Node integration tests to our CI here. It would be great if that could land first, and we could also add a CI job for the in-memory store as part of this PR then, ensuring the implementation actually works as expected.
@Harshdev098 Please rebase now that #62 landed to make use of the new CI checks here.
@tankyleo Can you please take a look at it?
This needs a rebase by now unfortunately, sorry!
Force-pushed f571b29 to 06eeb34
Thanks! Please feel free to re-request review once @tankyleo is happy with the current state.
tankyleo
left a comment
Thanks for your patience
```toml
[server_config]
host = "127.0.0.1"
port = 8080
bind_address = "127.0.0.1:8080"
```
Please make all the changes in this file in a single commit.
```yaml
path: ldk-node

- name: Build and Deploy VSS Server
- name: Create Postgres config
```
Let's not make any edits to the CI of the postgres backend in this PR. We'll use a cfg flag to enable the in-memory backend so that no edits to the config file will be needed.
```toml
[server_config]
host = "127.0.0.1"
port = 8080
bind_address = "127.0.0.1:8080"
```
Please keep all your changes for the CI of the in-memory server to a single commit.
```rust
) -> Result<(), VssError> {
    let key = build_storage_key(user_token, store_id, &key_value.key);

    if key_value.version == -1 {
```
This is used to validate a PutObjectRequest, and therefore should return a ConflictError in case the key does not exist, even in the case of a non-conditional write.
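Taking this comment at face value, the validation could look roughly like the sketch below. The error enum, helper name, and `HashMap`-as-store are hypothetical stand-ins, not the actual vss-server types: a missing key yields a conflict even for a non-conditional write, and `-1` only disables the version comparison.

```rust
use std::collections::HashMap;

// Hypothetical error type, loosely following the review comment.
#[derive(Debug, PartialEq)]
enum VssError {
    ConflictError(String),
}

/// Validate the version carried by a put request against the stored version.
/// Per the review: a missing key is a conflict even for a non-conditional
/// (version == -1) write; otherwise -1 skips the version comparison.
fn validate_put_version(
    store: &HashMap<String, i64>,
    key: &str,
    requested_version: i64,
) -> Result<(), VssError> {
    match store.get(key) {
        None => Err(VssError::ConflictError(format!("key {} does not exist", key))),
        Some(_) if requested_version == -1 => Ok(()), // non-conditional: no version check
        Some(&stored) if stored == requested_version => Ok(()),
        Some(&stored) => Err(VssError::ConflictError(format!(
            "version mismatch: stored {}, requested {}",
            stored, requested_version
        ))),
    }
}

fn main() {
    let mut store = HashMap::new();
    store.insert("k".to_string(), 3i64);
    assert!(validate_put_version(&store, "k", -1).is_ok());
    assert!(validate_put_version(&store, "k", 3).is_ok());
    assert!(validate_put_version(&store, "k", 2).is_err());
    assert!(validate_put_version(&store, "missing", -1).is_err());
    println!("ok");
}
```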
```rust
let store_id = request.store_id;
let mut guard = self.store.lock().await;

execute_delete_object(&mut guard, &user_token, &store_id, &key_value);
```
Before we reach this point for conditional deletes, we still need to make sure that the version matches in case the key does exist. See rust/api/src/types.rs for further details.
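A sketch of the missing check, with hypothetical names and a `HashMap` standing in for the store (the authoritative version semantics live in rust/api/src/types.rs): a delete with version `-1` is non-conditional, while any other version must match the stored one before the key is removed.

```rust
use std::collections::HashMap;

/// Hypothetical conditional-delete helper; the real signature differs.
/// Returns Ok(true) if the key was removed, Ok(false) if it did not exist,
/// and Err on a version mismatch for a conditional delete.
fn delete_with_version_check(
    store: &mut HashMap<String, i64>,
    key: &str,
    requested_version: i64,
) -> Result<bool, String> {
    // Copy the stored version out so the immutable borrow ends before removal.
    let stored = match store.get(key) {
        Some(&v) => v,
        None => return Ok(false), // deleting a missing key is a no-op here
    };
    // Conditional delete: the stored version must match before removal.
    if requested_version != -1 && stored != requested_version {
        return Err(format!(
            "version mismatch: stored {}, requested {}",
            stored, requested_version
        ));
    }
    store.remove(key);
    Ok(true)
}

fn main() {
    let mut store = HashMap::new();
    store.insert("k".to_string(), 5i64);
    assert!(delete_with_version_check(&mut store, "k", 4).is_err());
    assert_eq!(delete_with_version_check(&mut store, "k", 5), Ok(true));
    assert_eq!(delete_with_version_check(&mut store, "k", 5), Ok(false));
    store.insert("k2".to_string(), 9);
    assert_eq!(delete_with_version_check(&mut store, "k2", -1), Ok(true));
    println!("ok");
}
```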
Let's remove all the changes in this file and use a cfg flag to override the postgres backend configuration in src/main.rs. See how we use the noop_authorizer; we want to do something very similar for the in-memory store, as it will also be a developer tool and not something to be run in any production setting.
```rust
if args.len() < 2 {
    eprintln!("Usage: {} <config-file-path> [--in-memory]", args[0]);
    std::process::exit(1);
}
```
A change of direction, thank you: I prefer we use `RUSTFLAGS="--cfg in-memory-store"` to set the backend, as it's a developer-only tool just like the noop_authorizer. We can delete this, as the postgres backend does not require a configuration file.
```rust
    std::process::exit(-1);
});

let store: Arc<dyn KvStore> = if let Some(crt_pem) = config.tls_config {
```
Here let's use cfg flags to set the in-memory-store backend.
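A minimal sketch of that suggestion, assuming a hypothetical `in_memory_store` cfg flag and stand-in store types (the real src/main.rs wiring will differ):

```rust
use std::sync::Arc;

// Illustrative stand-ins for the backend types; not the vss-server API.
trait KvStore: Send + Sync {
    fn name(&self) -> &'static str;
}

struct PostgresStore;
impl KvStore for PostgresStore {
    fn name(&self) -> &'static str { "postgres" }
}

struct InMemoryStore;
impl KvStore for InMemoryStore {
    fn name(&self) -> &'static str { "in-memory" }
}

fn build_store() -> Arc<dyn KvStore> {
    // Compile-time switch: enabled by building with
    // RUSTFLAGS="--cfg in_memory_store". The flag is unset in a normal
    // build, so postgres is chosen here.
    if cfg!(in_memory_store) {
        Arc::new(InMemoryStore)
    } else {
        Arc::new(PostgresStore)
    }
}

fn main() {
    assert_eq!(build_store().name(), "postgres");
    println!("backend: {}", build_store().name());
}
```

Note that cfg names must be valid Rust identifiers, so the actual flag would use underscores rather than the hyphens written above.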
```toml
# Maximum request body size in bytes. Can be set here or be overridden by env var 'VSS_MAX_REQUEST_BODY_SIZE'
# Defaults to the maximum possible value of 1 GB if unset.
# max_request_body_size = 1073741824
store_type = "postgres" # "postgres" for using postgresql and "in-memory" for testing purposes
```
I prefer we make no changes to the config file for a developer-only tool.
**Note:** For testing purposes, you can pass `--in-memory` to use in-memory instead of PostgreSQL
```
cargo run -- server/vss-server-config.toml --in-memory
```
We'll have to update this to use RUSTFLAGS instead.
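Presumably the README invocation would then become something along these lines (hedged: the exact flag name is an assumption, using an underscore since cfg names must be valid Rust identifiers):

```
# Developer-only: build and run with the in-memory backend enabled.
RUSTFLAGS="--cfg in_memory_store" cargo run
```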
Added an in-memory store for testing purposes.
The config file can be edited to select either the PostgreSQL or the in-memory store.