This repository has been archived by the owner on Jun 18, 2020. It is now read-only.

How to run with standalone #57

Open
srikrishnacancun opened this issue Aug 11, 2016 · 8 comments

Comments

@srikrishnacancun

If I have a standalone Spark cluster with HDFS/YARN configured, what changes are required to run this code?

@mbeitchman
Contributor

Hi,

Can you tell me which sample you are referring to?

Is your standalone cluster an EMR cluster?

@srikrishnacancun
Author

Hi,

I am referring to HadoopTerasort. Yes, I want to run it against my own
standalone Spark cluster or Hadoop cluster. What, if anything, needs to be
modified to make it work? We want to write the output to S3 as in the
example. How large a file can we process?

https://github.com/awslabs/data-pipeline-samples/tree/master/samples/HadoopTerasort

Thanks

Srikrishna


@mbeitchman
Contributor

Hi Srikrishna,

You will need to run Task Runner on your cluster. Please see this link for more details.

http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html

I think you can process as much data as you want. Of course, runtime will depend on your cluster size.

Marc
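At a high level, the setup the linked docs describe looks roughly like the sketch below. The download URL is the one given in the AWS Data Pipeline documentation of that era; the credentials path, region, worker group name, and S3 log bucket are placeholders, not values from this thread:

```shell
# Sketch: installing and starting Task Runner on the cluster master node,
# per the AWS Data Pipeline Developer Guide. Names below are illustrative.

# 1. Download the Task Runner JAR (requires Java on the host).
wget https://s3.amazonaws.com/datapipeline-us-east-1/us-east-1/software/latest/TaskRunner/TaskRunner-1.0.jar

# 2. Start Task Runner, polling a worker group that your pipeline
#    activities will target via their workerGroup field.
java -jar TaskRunner-1.0.jar \
  --config ./credentials.json \
  --workerGroup=my-spark-cluster \
  --region=us-east-1 \
  --logUri=s3://my-bucket/task-runner-logs
```

Task Runner then long-polls the Data Pipeline service for activities tagged with that worker group and runs them on the host where it is installed.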

@srikrishnacancun
Author

Hi,

Thanks for your quick response. Can you explain the high-level steps so
that I can understand?

I don't want to use an EMR cluster. I have a standalone Spark 1.6.1
cluster, and I want to read the input from and write the output to S3.

Have a nice day.

Thanks

Srikrishna


@srikrishnacancun
Author

Hi,

So you are saying the EMR cluster can be replaced with a physical-server-based
Spark/Hadoop cluster? Is that right?

I am very eager to receive your response.

Thanks

Srikrishna


@mbeitchman
Contributor

Yes, that is correct. Task Runner is an agent that runs on AWS or on-premises resources to execute the activities in the pipeline. The documentation above explains this in more detail. Please follow up if you have questions once you get started.

@srikrishnacancun
Author

Hi,

Thanks. How do I modify the script/code to point my custom pipeline at my
standalone Spark cluster installation? Can you show a code snippet?

srikrishna


@mbeitchman
Contributor

To connect a Task Runner that you've installed to the pipeline activities it should process, add a workerGroup field to the activity object, and configure Task Runner to poll for that worker group value. You do this by passing the worker group string as a parameter (for example, --workerGroup=wg-12345) when you run the Task Runner JAR file.

http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-task-runner-user-managed.html
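As a sketch, in the pipeline definition the activity that would normally reference an EmrCluster via runsOn instead carries a workerGroup field matching the value passed to Task Runner. The activity id, spark-submit command, master URL, and bucket names below are illustrative placeholders, not taken from the Terasort sample:

```json
{
  "objects": [
    {
      "id": "TerasortActivity",
      "type": "ShellCommandActivity",
      "command": "spark-submit --master spark://master:7077 terasort.jar s3://my-bucket/input s3://my-bucket/output",
      "workerGroup": "my-spark-cluster",
      "schedule": { "ref": "DefaultSchedule" }
    }
  ]
}
```

With this in place, a Task Runner started with --workerGroup=my-spark-cluster on the standalone cluster picks up the activity and executes the command locally, so the job runs on your own hardware while Data Pipeline handles scheduling and retries.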
