Feat/evm devnet ready #772

Merged: 65 commits into devnet-ready on Oct 31, 2024
Conversation

@gztensor (Contributor) commented Aug 29, 2024

Description

Add EVM functionality to the chain

Related Issue(s)

n/a

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Other (please describe):

Breaking Change

n/a

Checklist

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have run cargo fmt and cargo clippy to ensure my code is formatted and linted correctly
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

Screenshots (if applicable)

n/a

Additional Notes

Testing can be done with instructions here:
https://github.com/gztensor/evm-demo

fixes #862

@gztensor gztensor requested a review from unconst as a code owner August 29, 2024 02:12
@gztensor gztensor changed the base branch from main to devnet-ready August 29, 2024 02:13
@sam0x17 (Contributor) left a comment:
looks good, a few areas I think could be improved / explored

Comment on lines +119 to +123
Ok(FrontierPartialComponents {
    filter_pool: Some(Arc::new(Mutex::new(BTreeMap::new()))),
    fee_history_cache: Arc::new(Mutex::new(BTreeMap::new())),
    fee_history_cache_limit: config.fee_history_limit,
})
@sam0x17 (Contributor) commented Aug 30, 2024:
Would be good to discuss the performance implications of having a BTreeMap behind an Arc + Mutex. Perhaps something like a dashmap would be more appropriate if key ordering doesn't matter? Ideally we'd use something with more fine-grained locking, I would think, though I'm not at all familiar with the access pattern here, so this might be reasonable. What is the access pattern?
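A minimal sketch of the trade-off under discussion (illustrative only: `FilterId`/`FilterState` are stand-ins for the real pool entry types, and dashmap is an external crate that this PR does not add):

use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};

use dashmap::DashMap;

// Stand-in types; the real filter pool maps filter ids to filter state.
type FilterId = u64;
type FilterState = String;

// Current shape: one coarse Mutex guards the whole ordered map, so every
// reader and writer serialises on the same lock.
type CoarseLockedPool = Arc<Mutex<BTreeMap<FilterId, FilterState>>>;

// Alternative being suggested: DashMap shards the map internally, so threads
// touching different keys rarely contend, but key ordering is not preserved.
type ShardedPool = Arc<DashMap<FilterId, FilterState>>;

fn touch(coarse: &CoarseLockedPool, sharded: &ShardedPool) {
    coarse.lock().unwrap().insert(1, "pending".into()); // locks the whole map
    sharded.insert(1, "pending".into()); // locks only one shard
}

Whether the switch is worth it depends on the access pattern asked about in this thread: with low contention the coarse lock is simpler and keeps ordering.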

@gztensor (Contributor Author) replied.
@sam0x17 (Contributor) commented Sep 5, 2024:
Gotcha. The litmus test would be: lots of contention => dashmap.

Comment on lines 127 to 208
pub async fn spawn_frontier_tasks(
    task_manager: &TaskManager,
    client: Arc<FullClient>,
    backend: Arc<FullBackend>,
    frontier_backend: Arc<FrontierBackend<FullClient>>,
    filter_pool: Option<FilterPool>,
    storage_override: Arc<dyn StorageOverride<Block>>,
    fee_history_cache: FeeHistoryCache,
    fee_history_cache_limit: FeeHistoryCacheLimit,
    sync: Arc<SyncingService<Block>>,
    pubsub_notification_sinks: Arc<
        fc_mapping_sync::EthereumBlockNotificationSinks<
            fc_mapping_sync::EthereumBlockNotification<Block>,
        >,
    >,
) {
    // Spawn main mapping sync worker background task.
    match &*frontier_backend {
        fc_db::Backend::KeyValue(b) => {
            task_manager.spawn_essential_handle().spawn(
                "frontier-mapping-sync-worker",
                Some("frontier"),
                fc_mapping_sync::kv::MappingSyncWorker::new(
                    client.import_notification_stream(),
                    Duration::new(6, 0),
                    client.clone(),
                    backend,
                    storage_override.clone(),
                    b.clone(),
                    3,
                    0,
                    fc_mapping_sync::SyncStrategy::Normal,
                    sync,
                    pubsub_notification_sinks,
                )
                .for_each(|()| future::ready(())),
            );
        }
        fc_db::Backend::Sql(b) => {
            task_manager.spawn_essential_handle().spawn_blocking(
                "frontier-mapping-sync-worker",
                Some("frontier"),
                fc_mapping_sync::sql::SyncWorker::run(
                    client.clone(),
                    backend,
                    b.clone(),
                    client.import_notification_stream(),
                    fc_mapping_sync::sql::SyncWorkerConfig {
                        read_notification_timeout: Duration::from_secs(30),
                        check_indexed_blocks_interval: Duration::from_secs(60),
                    },
                    fc_mapping_sync::SyncStrategy::Parachain,
                    sync,
                    pubsub_notification_sinks,
                ),
            );
        }
    }

    // Spawn Frontier EthFilterApi maintenance task.
    if let Some(filter_pool) = filter_pool {
        // Each filter is allowed to stay in the pool for 100 blocks.
        const FILTER_RETAIN_THRESHOLD: u64 = 100;
        task_manager.spawn_essential_handle().spawn(
            "frontier-filter-pool",
            Some("frontier"),
            EthTask::filter_pool_task(client.clone(), filter_pool, FILTER_RETAIN_THRESHOLD),
        );
    }

    // Spawn Frontier FeeHistory cache maintenance task.
    task_manager.spawn_essential_handle().spawn(
        "frontier-fee-history",
        Some("frontier"),
        EthTask::fee_history_task(
            client,
            storage_override,
            fee_history_cache,
            fee_history_cache_limit,
        ),
    );
}
Contributor:
This looks good, but we should probably think about adding a mechanism for handling task failures: if any of these tasks fails, it could leave the node in an inconsistent state. Logging and retry mechanisms could be useful here. Basically, it would feel safer if this were more self-healing.
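A sketch of the kind of self-healing wrapper being suggested, not code from this PR: rerun an essential task whenever its future resolves, logging each restart. It assumes a tokio runtime and the log crate (both already in use by the node) and an arbitrary 5-second backoff:

use std::time::Duration;

// Hypothetical helper: keep restarting a background task, logging each exit.
async fn run_with_restart<F, Fut>(name: &'static str, mut make_task: F)
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = ()>,
{
    loop {
        make_task().await;
        log::warn!("essential task `{}` exited; restarting in 5s", name);
        tokio::time::sleep(Duration::from_secs(5)).await;
    }
}

Whether a blind restart is actually safe depends on each worker: for the mapping-sync workers an exit may indicate an inconsistency that a restart alone would not repair, so logging plus alerting may be the more conservative first step.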

Comment on lines 211 to 347
        graph,
        converter,
        is_authority,
        enable_dev_signer,
        network,
        sync,
        frontier_backend,
        storage_override,
        block_data_cache,
        filter_pool,
        max_past_logs,
        fee_history_cache,
        fee_history_cache_limit,
        execute_gas_limit_multiplier,
        forced_parent_hashes,
        pending_create_inherent_data_providers,
    } = deps;

    let mut signers = Vec::new();
    if enable_dev_signer {
        signers.push(Box::new(EthDevSigner::new()) as Box<dyn EthSigner>);
    }

    io.merge(
        Eth::<B, C, P, CT, BE, A, CIDP, EC>::new(
            client.clone(),
            pool.clone(),
            graph.clone(),
            converter,
            sync.clone(),
            signers,
            storage_override.clone(),
            frontier_backend.clone(),
            is_authority,
            block_data_cache.clone(),
            fee_history_cache,
            fee_history_cache_limit,
            execute_gas_limit_multiplier,
            forced_parent_hashes,
            pending_create_inherent_data_providers,
            Some(Box::new(AuraConsensusDataProvider::new(client.clone()))),
        )
        .replace_config::<EC>()
        .into_rpc(),
    )?;

    if let Some(filter_pool) = filter_pool {
        io.merge(
            EthFilter::new(
                client.clone(),
                frontier_backend.clone(),
                graph.clone(),
                filter_pool,
                500_usize, // max stored filters
                max_past_logs,
                block_data_cache.clone(),
            )
            .into_rpc(),
        )?;
    }

    io.merge(
        EthPubSub::new(
            pool,
            client.clone(),
            sync,
            subscription_task_executor,
            storage_override.clone(),
            pubsub_notification_sinks,
        )
        .into_rpc(),
    )?;

    io.merge(
        Net::new(
            client.clone(),
            network,
            // Whether to format the `peer_count` response as Hex (default) or not.
            true,
        )
        .into_rpc(),
    )?;

    io.merge(Web3::new(client.clone()).into_rpc())?;

    io.merge(
        Debug::new(
            client.clone(),
            frontier_backend,
            storage_override,
            block_data_cache,
        )
        .into_rpc(),
    )?;

    #[cfg(feature = "txpool")]
    io.merge(TxPool::new(client, graph).into_rpc())?;

    Ok(io)
}
Contributor:
I recommend breaking this up into helper functions, especially if we think this will grow to be even longer
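A minimal sketch of the shape such a split could take; names, signatures, and error types here are hypothetical simplifications, and the real helpers would carry the same generic bounds create_full already has:

use jsonrpsee::RpcModule;

// Hypothetical helper: build one namespace's RPC module in isolation.
fn net_module(/* client, network, ... */) -> Result<RpcModule<()>, Box<dyn std::error::Error>> {
    let module = RpcModule::new(());
    // Construct `Net::new(...).into_rpc()` here and return it.
    Ok(module)
}

// `create_full` then reduces to one merge call per namespace.
fn create_full(/* deps, subscription executor, ... */) -> Result<RpcModule<()>, Box<dyn std::error::Error>> {
    let mut io = RpcModule::new(());
    io.merge(net_module(/* ... */)?)?;
    // ...plus eth_module, eth_filter_module, eth_pubsub_module, web3_module,
    // debug_module, and txpool_module, each built the same way.
    Ok(io)
}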

@gztensor (Contributor Author):
Done, please review @sam0x17

Comment on lines 96 to 103
pub fn bytes_to_account_id(account_id_bytes: &[u8]) -> Result<AccountId32, PrecompileFailure> {
    AccountId32::from_slice(account_id_bytes).map_err(|_| {
        log::info!("Error parsing account id bytes {:?}", account_id_bytes);
        PrecompileFailure::Error {
            exit_status: ExitError::InvalidRange,
        }
    })
}
Contributor:
This should be a TryFrom
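One possible shape for a TryFrom-based version, illustrative only: the orphan rule prevents adding a custom-error TryFrom<&[u8]> impl directly on the foreign AccountId32, so this sketch uses a local wrapper type, and the PrecompileFailure/ExitError imports are assumed to match the surrounding module:

use sp_core::crypto::{AccountId32, ByteArray};

// Hypothetical local newtype so the conversion can sit behind the standard trait.
pub struct PrecompileAccountId(pub AccountId32);

impl TryFrom<&[u8]> for PrecompileAccountId {
    type Error = PrecompileFailure;

    fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> {
        AccountId32::from_slice(bytes).map(PrecompileAccountId).map_err(|_| {
            log::info!("Error parsing account id bytes {:?}", bytes);
            PrecompileFailure::Error {
                exit_status: ExitError::InvalidRange,
            }
        })
    }
}

Call sites would then read PrecompileAccountId::try_from(bytes)?.0, with ? working wherever the precompile already returns Result<_, PrecompileFailure>.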

@gztensor (Contributor Author):
Done, please review @sam0x17

@sam0x17 sam0x17 added the skip-cargo-audit This PR fails cargo audit but needs to be merged anyway label Oct 16, 2024
@sam0x17 sam0x17 linked an issue Oct 16, 2024 that may be closed by this pull request
Comment on lines +1049 to +1052
parameter_types! {
    pub const SubtensorChainId: u64 = 0x03B1; // Unicode for lowercase alpha
    // pub const SubtensorChainId: u64 = 0x03C4; // Unicode for lowercase tau
}
Contributor:
@gregzaitsev is this ok?
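For reference (not part of the diff): 0x03B1 is decimal 945, which is presumably the chain id that EVM tooling (eth_chainId responses, wallet network configs) ends up seeing, while the commented-out tau alternative 0x03C4 would be 964. A trivial, illustrative compile-time check of those conversions:

// Decimal values of the two candidate chain ids (illustrative only).
const _: () = assert!(0x03B1 == 945 && 0x03C4 == 964);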

@orriin orriin marked this pull request as ready for review October 31, 2024 06:49
@unconst unconst merged commit 3fb89ec into devnet-ready Oct 31, 2024
13 checks passed
Labels
skip-cargo-audit: This PR fails cargo audit but needs to be merged anyway

Development
Successfully merging this pull request may close these issues:
add skip-cargo-audit label

6 participants