Contribution request: add support for SFtp and SSH for publishing and resolving


Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Adam Murdoch

On 16 Mar 2014, at 12:38 am, Marcin Erdmann <[hidden email]> wrote:

I have another question. One of the test cases in the spec is Publish via 'sftp' to ivy repository with multiple ivyPattern and artifactPattern configured - what do we want to verify here?

I wouldn’t bother. Provided that we have test coverage that publishing with a single pattern works for every transport, then the multiple pattern case can be covered with a single transport.

I've noticed that when publishing to such a repository the first ivy pattern and artifact pattern are used. Should the test case assert that this happens?


On Fri, Mar 14, 2014 at 8:22 PM, Marcin Erdmann <[hidden email]> wrote:



On Fri, Mar 14, 2014 at 7:56 PM, Adam Murdoch <[hidden email]> wrote:

Yet, there is plenty to be done on error handling and making sure all resources are being released even in error scenarios - so the boring but important parts.

I have one question about error handling. There are three types of failures listed in the spec: invalid credentials, cannot connect to the server, and everything else (called server exception). I think we probably want to extract all permission-related errors (both when resolving and when uploading) as a separate error type, so that the error message gives the user an idea that the error is permission-related, as it feels like it might be a common error. What do you think?

The key is to make the error message informative at this stage, which we don’t necessarily need a new type for. If we formalise ‘not authorised’ problems in the resource APIs then we would also need to update the file and http backed resource implementations too. So, I’d leave this as a later refactoring.

What I meant here by a new type is not a new exception class, but differentiating permission errors from generic errors when it comes to what message is shown when such an error occurs. At the moment I'm throwing a new runtime exception of type SftpException with a message saying what happened (invalid credentials, connection error) and where (the host and port part of the URL). I was asking whether we should check if we have permission to perform a given operation (file download or upload, directory creation, etc.) and, if not, throw a SftpException with a message saying that permission was denied.
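The distinction described above could be sketched roughly as follows - a single runtime exception class whose factory methods bake the kind of failure and the host/port context into the message. Only the class name SftpException comes from this thread; the constructors and factory method names are invented for illustration and are not the actual Gradle code:

```java
// Illustrative sketch, not the real Gradle class: one exception type, with the
// failure kind and location encoded in the message rather than in subclasses.
class SftpException extends RuntimeException {
    SftpException(String message) {
        super(message);
    }

    SftpException(String message, Throwable cause) {
        super(message, cause);
    }

    static SftpException invalidCredentials(String host, int port) {
        return new SftpException(String.format(
                "Invalid credentials for SFTP server at sftp://%s:%d", host, port));
    }

    static SftpException connectionFailure(String host, int port, Throwable cause) {
        return new SftpException(String.format(
                "Could not connect to SFTP server at sftp://%s:%d", host, port), cause);
    }

    // The permission-denied variant discussed above: same class, but the
    // message names the operation that was refused.
    static SftpException permissionDenied(String host, int port, String operation) {
        return new SftpException(String.format(
                "Permission denied for %s on SFTP server at sftp://%s:%d",
                operation, host, port));
    }
}
```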
 


Also, DefaultExternalResourceRepository requires an implementation of ExternalResourceLister, yet it is currently not exercised by any test - to be honest, I haven't looked into how it's used or what kind of tests would be needed for it, so any pointers would be appreciated. Do we want to add some tests for that to the initial story?

It’s used whenever there’s a dynamic version in a dependency declaration. So, if you’ve got one of these in one of the tests somewhere, then we're good.

I don't have a test like that yet so I will add some. Thanks for pointing me in the right direction.
 



On Thu, Mar 6, 2014 at 3:56 AM, Adam Murdoch <[hidden email]> wrote:

On 6 Mar 2014, at 7:33 am, Marcin Erdmann <[hidden email]> wrote:

I finally got my head around the contract of ExternalResourceAccessor, and after doing some reverse engineering of HttpResourceAccessor and Ivy's SFTPResolver, as well as skimming the SFTP protocol specs, the following are my findings and questions:

Error handling is far from perfect in both clients - they only return a generic error when you try to get file metadata for a file that doesn't exist, even though there is a specific substatus for this situation called SSH_FX_NO_SUCH_FILE. I can see that this substatus is returned by SSHD's implementation of the SFTP server when accessing files that don't exist, even though the spec (http://tools.ietf.org/id/draft-ietf-secsh-filexfer-00.txt) cryptically describes it as "returned when a reference is made to a file which should exist but doesn't" - the "should exist but doesn't" part baffles me a bit. Anyway, I will assume that this substatus is returned whenever trying to stat a non-existing file.

I'm going to use SSHD's client as it seems to be better maintained and I like it more. I will also be able to easily change its behavior when it comes to handling the aforementioned SSH_FX_NO_SUCH_FILE substatus.
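For reference, the "missing file means the resource doesn't exist, anything else is a real error" policy above could be sketched like this. The numeric status codes come from the draft spec linked above; the sizeOrNull helper and its signature are made up for illustration and are not part of the SSHD or Gradle APIs:

```java
import java.io.IOException;

final class SftpStat {
    // Status codes for SSH_FXP_STATUS responses, per draft-ietf-secsh-filexfer.
    static final int SSH_FX_OK = 0;
    static final int SSH_FX_EOF = 1;
    static final int SSH_FX_NO_SUCH_FILE = 2;
    static final int SSH_FX_PERMISSION_DENIED = 3;
    static final int SSH_FX_FAILURE = 4;

    // Illustrative policy: a stat that fails with SSH_FX_NO_SUCH_FILE means
    // "resource does not exist" (null), while any other non-OK status is a
    // genuine error that should propagate to the caller.
    static Long sizeOrNull(int status, long sizeFromServer, String path) throws IOException {
        if (status == SSH_FX_OK) {
            return sizeFromServer;
        }
        if (status == SSH_FX_NO_SUCH_FILE) {
            return null;
        }
        throw new IOException("Could not stat '" + path + "': SFTP status " + status);
    }
}
```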

We will probably want to throw a specific contextual exception when credentials are incorrect, right?

At this stage, I think it’s sufficient that the error message gives the user some idea that the credentials are incorrect. Eventually, it would be nice to tell the difference, if possible.


It feels like we want separate SSH sessions for getting the metadata and getting the file - the fact that you request an ExternalResource doesn't mean that you will ever read it (and thus close it), but you need to open an SSH session to get the metadata. So the idea here is to open a session, get the metadata to see if the resource exists, and close the session, possibly passing the metadata to the created ExternalResource if the resource exists. Then, if the resource contents are requested, open a new session and close it when close() is called on the ExternalResource. Are we ok with such an approach?

Yes, for now. I think at some point we will rework the ExternalResourceAccessor interface so that it can better deal with the fact that for some transports (file, sftp), the meta-data for a resource and the content for a resource have to be fetched as separate operations. The interface at the moment assumes that it’s cheap-ish to get the meta-data at the same time the content is requested.

This way, the caller can decide whether it needs the meta-data or not, and invoke the appropriate operations - get meta-data, get content or get meta-data and content (if supported).
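A toy model of the lifecycle agreed above might look like the following. The sessions here are fake in-memory stand-ins and all names are invented; the real ExternalResource and ExternalResourceAccessor types in Gradle look different:

```java
import java.io.Closeable;

// Stand-in for an SSH/SFTP session; only tracks whether it has been released.
class FakeSession implements Closeable {
    boolean open = true;
    @Override public void close() { open = false; }
}

// Stand-in for ExternalResource: carries metadata captured up front, and opens
// a second session lazily, only when the content is actually read.
class SftpBackedResource implements Closeable {
    private final long contentLength;
    FakeSession contentSession;

    SftpBackedResource(long contentLength) { this.contentLength = contentLength; }

    long getContentLength() { return contentLength; }

    FakeSession openContent() {
        contentSession = new FakeSession();
        return contentSession;
    }

    @Override public void close() {
        if (contentSession != null) contentSession.close();
    }
}

class FakeAccessor {
    // statResult < 0 stands in for SSH_FX_NO_SUCH_FILE coming back from the server.
    SftpBackedResource getResource(long statResult) {
        FakeSession metadataSession = new FakeSession();
        try {
            return statResult < 0 ? null : new SftpBackedResource(statResult);
        } finally {
            metadataSession.close(); // the first session never outlives the stat
        }
    }
}
```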


--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com

















Re: Contribution request: add support for SFtp and SSH for publishing and resolving

erdi
I think it's time for someone to verify what I managed to come up with at: https://github.com/erdi/gradle/commit/630003850f9dc562ce7a757ef04ffd5957628a62. It would be cool to know what should be improved and to decide what's next - do we merge it in, or do we continue with the next story?

Some notes:
- As suggested, I didn't implement a test for publishing to a repo with multiple artifact/ivy patterns.
- I decided not to implement a test for generic error handling when "the server throws an exception" - I couldn't come up with a way to simulate such a situation using the current SFTPServer fixture, and I've learned that when IOExceptions are thrown from a resource accessor, uploader or lister, Gradle provides decent error messages about what it was doing when the failure occurred. Also, when an unexpected response comes back from the server, SSHD's sftp client puts the status code in the exception message it throws, so it is possible to learn what exactly happened. Maybe not the easiest task, as it involves looking at the client code or the sftp specs to learn what a given error code means, but it's possible.
- I didn't do anything special for permission errors. It would involve quite a lot of changes to SSHD's sftp client, and I didn't think it was significant enough to put the effort in.
- I moved transport instantiation from DefaultIvyRepository to the RepositoryTransport factory. It felt a bit premature as it's still not used by the maven side of things, but I did what the design document said.
- I did not implement getResourceSha1() in SftpResourceAccessor - the contract says that it is allowed to return null, and none of the required tests exercise that method, so it didn't feel right to implement it at this point in time.
- I had to create IvySftpModule, even though it only delegates to a backing repository, and return it from IvySftpRepository.module() to get IvySftpRepository to compile.
- I simplified SFTPServer after VirtualFileSystemView was added in a recent version of SSHD. I needed to update the dependency on SSHD as SftpClient was only introduced recently.

Cheers,
Marcin







Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Adam Murdoch

On 20 Mar 2014, at 6:42 am, Marcin Erdmann <[hidden email]> wrote:

I think it's time for someone to verify what I managed to come up with at: https://github.com/erdi/gradle/commit/630003850f9dc562ce7a757ef04ffd5957628a62. It would be cool to know what should be improved and to decide what's next - do we merge it in, or do we continue with the next story?

Your changes look really good to me. I added some comments on the commit, but I think they are all things we can address in later commits. I’d say let’s merge this and go from there.


Some notes:
- As suggested I didn't implement a test for publishing to a repo with multiple artifact/ivy patterns.
- I decided not to implement a test for generic error handling when "the server throws an exception" - I couldn't come up with a way to simulate such a situation using the current SFTPServer fixture

It becomes important for caching, as we want to remember ‘not found’ differently to ‘it blew up’.

I suspect you can get an internal failure by throwing an exception out of the VirtualFileSystemView.





Re: Contribution request: add support for SFtp and SSH for publishing and resolving

erdi
On Wed, Mar 19, 2014 at 10:55 PM, Adam Murdoch <[hidden email]> wrote:

Your changes look really good to me. I added some comments on the commit, but I think they are all things we can address in later commits. I’d say let’s merge this and go from there.

I created a pull request (https://github.com/gradle/gradle/pull/256) and will address your comments.
 


Some notes:
- As suggested I didn't implement a test for publishing to a repo with multiple artifact/ivy patterns.
- I decided not to implement a test for generic error handling when "the server throws an exception" - I couldn't come up with a way to simulate such a situation using the current SFTPServer fixture

It becomes important for caching, as we want to remember ‘not found’ differently to ‘it blew up’.

I suspect you can get an internal failure by throwing an exception out of the VirtualFileSystemView.

Ok, I will change SFTPServer to use a VirtualFileSystemView from a field so that a different instance can be provided on a per-test basis.
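A rough sketch of that fixture change might look like the following. The FileSystemView interface and all names below are invented stand-ins, not the real SSHD or Gradle types; the point is only that the view lives in a field a test can swap for one that throws:

```java
import java.io.IOException;

// Invented stand-in for SSHD's file-system view abstraction.
interface FileSystemView {
    byte[] readFile(String path) throws IOException;
}

class SftpServerFixture {
    // Default view behaves normally; tests can swap in a failing one.
    private FileSystemView view = new FileSystemView() {
        @Override public byte[] readFile(String path) { return new byte[0]; }
    };

    void setFileSystemView(FileSystemView view) { this.view = view; }

    byte[] handleRead(String path) throws IOException {
        // An exception thrown by the view surfaces to the client as a generic
        // server failure, which is exactly what the error-handling test wants.
        return view.readFile(path);
    }
}
```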
 

and I've learned that when IOExceptions are thrown from a resource accessor, uploader or lister then gradle provides decent error messages about what it was doing when a failure occurred. Also, when an unexpected response is coming back from the server SSHD's sftp client puts status code in the exception message it throws so it is possible to learn what exactly has happened. Maybe not the easiest tasks as it involves looking at the client code or sftp specs to learn what that given error code means but it's possible
- I didn't do anything special for permission errors. It would involve quite a lot of changes to SSHD's sftp client and I didn't think that this is of such significance to actually put the effort in
- I moved transport instantiation from DefaultIvyRepository to RepositoryTransport factory. It felt a bit premature is it's still not used by the maven part of things but I did what the design document said
- I did not implement getResourceSha1() in SftpResourceAccessor - the contract says that it is allowed to return null and none of the test that were required exercise that method so it didn't feel right to implement it at this point in time
- I had to create IvySftpModule even though it only delegtes to a backing repository and return it from IvySftpRepository.module() to get IvySftpRepository to compile
- I simplified SFTPServer after VirtualFileSystemView was added in a recent version of SSHD. I needed to update dependency on SSHD as the SftpClient was only introduced recently.

Cheers,
Marcin


On Sun, Mar 16, 2014 at 8:04 PM, Adam Murdoch <[hidden email]> wrote:

On 16 Mar 2014, at 12:38 am, Marcin Erdmann <[hidden email]> wrote:

I have another question. One of test cases in the spec is Publish via 'sftp' to ivy repository with multiple ivyPattern and artifactPattern configured - what do we want to verify here?

I wouldn’t bother. Provided that we have test coverage that publishing with a single pattern works for every transport, then the multiple pattern case can be covered with a single transport.

I've noticed that when publishing to such a repository the first ivy pattern and artifact pattern are used. Should the test case assert that this happens?


On Fri, Mar 14, 2014 at 8:22 PM, Marcin Erdmann <[hidden email]> wrote:



On Fri, Mar 14, 2014 at 7:56 PM, Adam Murdoch <[hidden email]> wrote:

Yet, there is plenty to be done on error handling and making sure all resources are being released even in error scenarios - so the boring but important parts.

I have one question about error handling. There are three types of failures listed in the spec: invalid credentials, cannot connect to the server and everything else (called server exception). I think that we probably want to extract all permission related errors (when both resolving and uploading) as a separate error type so that the error message provides user an idea that the error is permission related as it feels that it might be a common error. What do you think?

The key is to make the error message informative at this stage, which we don’t necessarily need a new type for. If we formalise ‘not authorised’ problems in the resource APIs then we would also need to update the file and http backed resource implementations to. So, I’d leave this as a later refactoring.

What I meant here by a new type is not a new exception class but to differentiate permission errors from generic errors when it comes to what message is being shown when such an error occurs. At the moment I'm throwing a new runtime exception of type SftpException with a message saying what has happened (invalid credentials, connection error) and where (host and port part of the url). I was asking if we should check if we have permissions to perform a given operation (file download or upload, directory creation, etc) and if not throw a SftpException with a message saying that permission was denied to do it.
 


Also, DefaultExternalResourceRepository requires an implementation of an ExternalResourceLister yet it is currently not exercised by any test - to be honest I haven't looked into how it's used and what kind of tests would be needed for it so any pointers would be appreciated. Do we want to add some tests to the initial story for that?

It’s used whenever there’s a dynamic version in a dependency declaration. So, if you’ve got one of these in one of the tests somewhere, then we're good.

I don't have a test like that yet so I will add some. Thanks for pointing me in the right direction.
 



On Thu, Mar 6, 2014 at 3:56 AM, Adam Murdoch <[hidden email]> wrote:

On 6 Mar 2014, at 7:33 am, Marcin Erdmann <[hidden email]> wrote:

I finally got my head around the contract of ExternalResourceAccessor and after doing some reverse engineering of HttpResourceAccessor and Ivy's SFTPResolver as well as skimming SFTP protocol specs the following are my findings and questions:

Error handling is far from perfect in both clients - they only return a generic error when you try to get file metadata for a file that doesn't exist but there seems to be a specific substatus for such situation called SSH_FX_NO_SUCH_FILE. I can see that this substatus is returned by SSHD's implementation of SFTP server when accessing files that don't exist even though the spec(http://tools.ietf.org/id/draft-ietf-secsh-filexfer-00.txt) cryptically describes it as "is returned when a reference is made to a file which should exist but doesn't" - this should but doesn't part baffles me a bit. Anyway I will assume that this substatus is returned whenever trying to stat a non-existing file.

I'm going to use SSHD's client as it seems to be better maintained and I like it more. I will be also able to easily change it's behavior when it comes to handling the aforementioned SSH_FX_NO_SUCH_FILE substatus.

We will probably want to throw a specific contextual exception when credentials are incorrect, right?

At this stage, I think it’s sufficient that the error message gives the user some idea that the credentials are incorrect. Eventually, it would be nice to tell the difference, if possible.


It feels like we want to have separate ssh sessions for getting metadata and getting the file - the fact that you request an ExternalResource doesn't mean that you will ever read it and thus close it, but you need to open a ssh session to get the metadata. So the idea here is to open a session, get metadata to see if the resource exists and close the session possibly passing the metadata to the created ExternalResource if the resource exists. Then, if resource contents are requested open a new session and close it when close() is called on the ExternalResource. Are we ok with such approach?

Yes, for now. I think at some point we will rework the ExternalResourceAccessor interface so that it can better deal with the fact that for some transports (file, sftp), the meta-data for a resource and the content for a resource have to be fetched as separate operations. The interface at the moment assumes that it’s cheap-ish to get the meta-data at the same time the content is requested.

This way, the caller can decide whether it needs the meta-data or not, and invoke the appropriate operations - get meta-data, get content or get meta-data and content (if supported).
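
The two-session flow agreed above can be sketched roughly as follows; these are hypothetical stand-in types, not Gradle's ExternalResource API. One short-lived session is opened around the metadata check, and a second session is opened only if the content is actually read:

```java
// Hypothetical sketch of the two-session flow; these are not Gradle's types.
import java.util.concurrent.atomic.AtomicInteger;

interface SftpSession extends AutoCloseable {
    @Override
    void close(); // narrowed: no checked exception, so try-with-resources stays clean
}

interface SessionFactory {
    SftpSession open();
}

class LazySftpResource {
    private final SessionFactory sessions;

    LazySftpResource(SessionFactory sessions) {
        this.sessions = sessions;
    }

    // A session is opened and closed around the metadata check only.
    boolean exists(String path) {
        try (SftpSession session = sessions.open()) {
            return !path.isEmpty(); // stand-in for a real stat() over the session
        }
    }

    // A fresh session is opened only when the content is actually requested.
    byte[] readContent(String path) {
        try (SftpSession session = sessions.open()) {
            return new byte[0]; // stand-in for the real download
        }
    }
}
```

The caller that only checks existence pays for one session; only a caller that also reads the content opens a second one.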


--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com







Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Adam Murdoch

On 22 Mar 2014, at 5:52 am, Marcin Erdmann <[hidden email]> wrote:

On Wed, Mar 19, 2014 at 10:55 PM, Adam Murdoch <[hidden email]> wrote:

Your changes look really good to me. I added some comments on the commit, but I think they are all things we can address in later commits. I’d say let’s merge this and go from there.

I created a pull request (https://github.com/gradle/gradle/pull/256) and will address your comments.

Great, thank you.


 


Some notes:
- As suggested I didn't implement a test for publishing to a repo with multiple artifact/ivy patterns.
- I decided not to implement a test for generic error handling when "the server throws an exception" - I couldn't come up with a way to simulate such a situation using the current SFTPServer fixture

It becomes important for caching, as we want to remember ‘not found’ differently to ‘it blew up’.

I suspect you can get an internal failure by throwing an exception out of the VirtualFileSystemView.

Ok, I will change SFTPServer to use a VirtualFileSystemView from a field so that a different instance can be provided on a per test basis.

I’d expect we’d end up with something like the HttpServer fixture, which allows the test to set certain expectations about what requests will be made, and also allows the test to inject failures.
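
The shape of such a fixture might look something like this sketch (all names are hypothetical; the real HttpServer fixture is more elaborate): the test registers the requests it expects up front, and can attach an injected failure to any of them.

```java
// Hypothetical failure-injecting fixture sketch, in the spirit of the
// HttpServer fixture: expectations and injected failures are set up front.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class FailureInjectingFixture {
    private final Set<String> expected = new HashSet<>();
    private final Map<String, RuntimeException> failures = new HashMap<>();

    // The test declares which paths the client is allowed to request.
    void expect(String path) {
        expected.add(path);
    }

    // The test attaches a failure to be thrown when the path is requested.
    void failFor(String path, RuntimeException failure) {
        expected.add(path);
        failures.put(path, failure);
    }

    // Called by the fake server for each incoming request.
    String handle(String path) {
        if (!expected.contains(path)) {
            throw new AssertionError("unexpected request: " + path);
        }
        RuntimeException injected = failures.get(path);
        if (injected != null) {
            throw injected;
        }
        return "content of " + path;
    }
}
```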


--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com

Join us for Gradle Summit 2014, June 12th and 13th in Santa Clara, CA: http://www.gradlesummit.com


Re: Contribution request: add support for SFtp and SSH for publishing and resolving

erdi
I created a new pull request with some improvements: https://github.com/gradle/gradle/pull/259. I will be adding to it as I go. Feel free to pull it when you see fit.

The next step will be to modify SFTPServer to set expectations and inject failures.





Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Adam Murdoch

Great. Thanks for the pull request.

On 1 Apr 2014, at 6:13 am, Marcin Erdmann <[hidden email]> wrote:

I created a new pull request with some improvements: https://github.com/gradle/gradle/pull/259. I will be adding to it as I go. Feel free to pull it when you see fit.

The next step will be to modify SFTPServer to set expectations and inject failures.





--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com

Join us for Gradle Summit 2014, June 12th and 13th in Santa Clara, CA: http://www.gradlesummit.com


Re: Contribution request: add support for SFtp and SSH for publishing and resolving

adrianbk
In reply to this post by Adam Murdoch
Hi,

I'm keen to have a go at supporting Maven publishing and resolving to/from AWS S3. I'm wondering if I should follow the approach taken by @erdi to support SFTP? I'm thinking of adding a dependency on the AWS SDK for Java, which would take care of the actual file transfers. What may be a bit challenging is how to support authentication via both AWS access keys and IAM roles (similar to how https://github.com/spring-projects/aws-maven works), i.e. what the user-facing DSL would look like - it's not really username/password like the other transports.

Wondering if you guys think this is worthwhile or have any pointers?

Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Adam Murdoch

On 1 Sep 2014, at 8:34 pm, adrianbk <[hidden email]> wrote:

Hi,

I'm keen to have a go at supporting maven publishing and resolving to/from AWS S3. I'm wondering if I should follow the approach taken by @erdi to support SFTP? I'm thinking of adding a dependency on the AWS SDK for Java which would take care of the actual file transfers. What may be a bit challenging is how to support authentication via both AWS access keys and IAM roles (similar to how https://github.com/spring-projects/aws-maven works) i.e. what the user facing dsl would look like - it's not really username/password like the other transports.

Wondering if you guys think this is worthwhile or have any pointers?

Definitely worthwhile, I think.

As far as the DSL goes, we want an approach that will work with any transport that needs or supports credentials other than username + password. There are a few aspects here:

1. What type of credentials are required? eg username + password, client cert protected with a passphrase, NTLM, etc
2. Where should the credentials come from? eg provided by build logic, from ~/.ssh, entered by user on command-line, the os keychain, etc
3. For those transports that support several different types of credentials, some priority list of credentials to try.
4. Is the server trusted?

To get started, we don’t need to implement all this, but we should consider it when putting the DSL together. I’d say it would be sufficient just to have a way to do #1 and leave all the others for later.

The DSL should strongly type the various kinds of credentials. We have `PasswordCredentials`, so we’d add (say) `AwsCredentials`.

Here are a few options for the DSL:

1. use the `credentials` method and declare the type:

repositories {
    maven {
        url 's3:...'
        credentials(AwsCredentials) {
            accessKey = '...'
            secretKey = '...'
        }
    }
}

The type would default to `PasswordCredentials`

2. new methods for each kind of credentials:

repositories {
    maven {
        url 's3:...'
        awsCredentials {
            accessKey = '...'
            secretKey = '...'
        }
    }
}

In either case, we could have the credentials method replace the existing credentials.

Alternatively, the method could add a new set of credentials to attempt to use. They’d be attempted in the order added. For example:

repositories {
    maven {
        url 'sftp:...'
        credentials {
            username = '...'
            password = '...'
        }
        sshCredentials {
            passphrase = '...'
        }
    }
}
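
The "attempted in the order added" behaviour could be modelled roughly like this sketch, where plain strings stand in for Credentials instances and a predicate stands in for the server accepting or rejecting an attempt:

```java
// Sketch only: strings stand in for Credentials, and the Predicate stands in
// for a server accepting or rejecting an authentication attempt.
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

class CredentialChain {
    private final List<String> chain = new ArrayList<>();

    // Credentials are attempted in the order they were added.
    void add(String credentials) {
        chain.add(credentials);
    }

    // Returns the first credentials the server accepts, if any.
    Optional<String> authenticate(Predicate<String> serverAccepts) {
        return chain.stream().filter(serverAccepts).findFirst();
    }
}
```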

3. A container of credentials:

repositories {
    maven {
        url 's3:...'
        authentication { // this is a container of Credentials instances, similar to the `repositories` container
            aws {
                // adds an `AwsCredentials` instance
                ...
            }
        }
    }
}

Whichever option we go with, we’d also need to do something with the `getCredentials()` method as it is currently declared to return a `PasswordCredentials` instance.
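
One possible direction for getCredentials(), sketched with simplified stand-in types (not Gradle's actual classes): keep an untyped property internally, and add a typed accessor that fails fast when the configured credentials are of a different kind.

```java
// Simplified stand-ins for the real types; not Gradle's actual classes.
interface Credentials {
}

class PasswordCredentials implements Credentials {
    String username;
    String password;
}

class AwsCredentials implements Credentials {
    String accessKey;
    String secretKey;
}

class AuthenticationSupportedRepository {
    private Credentials credentials = new PasswordCredentials();

    void setCredentials(Credentials credentials) {
        this.credentials = credentials;
    }

    // Typed accessor: fails fast if the configured credentials are another kind.
    <T extends Credentials> T getCredentials(Class<T> type) {
        if (!type.isInstance(credentials)) {
            throw new IllegalStateException("Credentials are a "
                    + credentials.getClass().getSimpleName()
                    + ", not a " + type.getSimpleName());
        }
        return type.cast(credentials);
    }
}
```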


--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
CTO Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com




Re: Contribution request: add support for SFtp and SSH for publishing and resolving

adrianbk
I'm leaning towards #2 to start with:

repositories {
    maven {
        url 's3:...'
        awsCredentials {
            accessKey = '...'
            secretKey = '...'
            useInstanceMetaData = false // defaults to true
        }
    }
}

Re: Contribution request: add support for SFtp and SSH for publishing and resolving

johnrengelman
My personal opinion is that the #1 or #3 is the better API. They would allow extending the credential system externally much easier. 
— John




Re: Contribution request: add support for SFtp and SSH for publishing and resolving

adrianbk
I took an initial stab at starting with the DSL:

credentials(AwsCredentials){
...
}

My fork is here: https://github.com/adrianbk/gradle/tree/maven-aws-s3

I was wondering if one of the devs could take a look? A couple of things feel a bit nasty and wanted to get some pointers.

If you take a look at RepositoryCredentialsDslIntegrationTest, the scenario 'passwordTyped' fails because I can't declare sufficiently specific 'public void credentials(..)' methods in AbstractAuthenticationSupportedRepository to support different implementations of RepositoryCredentials. It feels a bit like a hack to get around how the ConfigureUtil/delegate stuff works. Wondering if there is a better approach, or an example of how to use configuration closures with specific types in DSLs.
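
One way to sidestep overload resolution entirely is a single generic method keyed on the credentials class. This sketch uses java.util.function.Consumer standing in for Gradle's Action, and simplified stand-in credential types:

```java
// Sketch: Consumer stands in for Gradle's Action, and the credential types are
// simplified stand-ins. One generic method keyed on the credentials class
// avoids having to declare one `credentials(..)` overload per type.
import java.util.function.Consumer;

interface Credentials {
}

class PasswordCredentials implements Credentials {
    String username;
    String password;
}

class AwsCredentials implements Credentials {
    String accessKey;
    String secretKey;
}

class AuthenticationSupport {
    private Credentials credentials = new PasswordCredentials();

    <T extends Credentials> void credentials(Class<T> type, Consumer<T> configure) {
        T instance;
        try {
            instance = type.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Cannot instantiate " + type, e);
        }
        configure.accept(instance);
        this.credentials = instance;
    }

    Credentials getCredentials() {
        return credentials;
    }
}
```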



Re: Contribution request: add support for SFtp and SSH for publishing and resolving

johnrengelman
I was thinking about this and I was wondering if it would make more sense to push the backing type further up the DSL. So instead of specifying the type at the credential method, you would specify it at the repository declaration:

repositories {
  maven(S3Maven) {
    bucket = 'mybucket'
    credentials {
      accessKey = 'key'
      secretKey = 'secret'
    }
  }
}

This way, then the DSL can be evaluated for the proper methods and types for that specific backing implementation. 

So in this case, the normal maven {} and ivy {} methods would use a default http/https implementation. 

Thoughts?
— John




Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Adam Murdoch
In reply to this post by adrianbk

On 7 Sep 2014, at 3:21 am, adrianbk <[hidden email]> wrote:

I took an initial stab at starting with the DSL:

credentials(AwsCredentials){
...
}

My fork is here: https://github.com/adrianbk/gradle/tree/maven-aws-s3

I was wondering if one of the devs could take a look? A couple of things feel a bit nasty and wanted to get some pointers.

Thanks for working on this. I’ve had a look at your changes and added some comments.


--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
CTO Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com




Re: Contribution request: add support for SFtp and SSH for publishing and resolving

adrianbk
Thanks, I updated with your recommendations (https://github.com/adrianbk/gradle/tree/maven-aws-s3). Again, open to any suggestions.

I was wondering about the ignored test, "should configure alternative credentials", in AbstractAuthenticationSupportedRepositorySpec: is it possible to test DSL closures like that at the unit level, and how would one go about hooking into the magic enhancement Gradle is doing?

@johnrengelman - I'd like to get going with the simpler DSL for now and tackle the maven/s3 transport stuff. As I figure out how all this hangs together and get my head around the code base I could be in a better position to attempt your last suggestion.

Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Adam Murdoch
In reply to this post by johnrengelman

On 7 Sep 2014, at 4:03 am, johnrengelman <[hidden email]> wrote:

I was thinking about this and I was wondering if it would make more sense to push the backing type further up the DSL. So instead of specifying the type at the credential method, you would specify it at the repository declaration:

repositories {
  maven(S3Maven) {
    bucket = 'mybucket'
    credentials {
      accessKey = 'key'
      secretKey = 'secret'
    }
  }
}

This way, then the DSL can be evaluated for the proper methods and types for that specific backing implementation. 

So in this case, the normal maven {} and ivy {} methods would use a default http/https a implementation. 

Thoughts?

It’s certainly an option.

This idea of being able to statically declare certain things about something in the model (a repository, for example) has been surfacing a bit recently. I reckon there’s something in it.

There are a few implications of a DSL like the above.

You could think of a repository definition as made up of a few things:

- The layout, or naming scheme for finding things.
- The meta-data scheme or schemes, eg pom, ivy, no meta-data, etc.
- The transport, or how to access the things in the repository.
- The credentials

These things are somewhat independent, so that you should be able to specify pretty much any combination of these things. Not every combination makes sense, at least out of the box. But most (or many) do.

We want to make these things extensible, so that I can define my own implementations of one of them, but reuse some of the other pieces. For example, I want to be able to define a new transport that I can combine with (say) Maven layout and meta-data schemes and (say) x509 cert credentials. Or add a custom authentication scheme to the HTTPS transports and use it to access my Ivy repository.

We also want to be able to reuse the credentials and transports for things other than accessing repositories. For example, in the wrapper/tooling API to fetch the Gradle runtime, or in the copy tasks to access remote resources, or in the new plugin DSL, etc.

One issue with something like using a `S3Maven` type is that it binds together the transport, credentials, format and meta-data scheme into a single static thing.

This isn’t necessarily a problem. The reality is that there are some very common combinations that make sense together and I, as the build script author, shouldn’t have to define all the pieces each time I point my build at some repository.

So, if using `S3Maven` were a convenience for assembling 3 or 4 pieces together, but there were some way to compose a repository definition from the smaller pieces, then this would be a reasonable approach.

One tweak would be to let you mix in a few different types for a repository definition:

repositories {
    maven(Maven, S3) {
        assert delegate instanceof Maven
        assert delegate instanceof S3

        // transport specific properties to specify the target
        bucket = '...'

        credentials {
            assert delegate instanceof AwsCredentials

            accessKey = '...'
            secretKey = '...'
        }

        // layout scheme specific properties
        uniqueSnapshots = false
    }
}

This way, you can compose repository layout and transport.

We might also push things in the other direction. The static repository type could also imply not just the possible types of properties you can provide for the repo, but also some canned values.

For example, we might have types like `MavenCentral`, `MavenLocal`, `JCenter`, `GradlePlugins` and so on, that imply not just the layout and transport, but also the URL where the repo is, and other settings. You might also be able to define your own types: `MyOrgPlugins`, `MyOrgReleases` etc.

There are a couple of other issues, given a definition like this:

repositories {
    someName(Maven, Http) {
    }
}

Firstly, the transport can almost always be inferred from the url, which I have to specify as well as the transport type. So, in some ways it would be good to be able to do this:

repositories {
    someName(Maven) {

        // implied from the URL
        assert delegate instanceof HttpRepository

        // http transport specific properties
        ...
    }
}
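
Inferring the transport from the URL could be as simple as a scheme lookup; the transport names in this sketch are purely illustrative:

```java
// Illustrative scheme-to-transport lookup; the transport names are made up.
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

class Transports {
    private static final Map<String, String> BY_SCHEME = new HashMap<>();
    static {
        BY_SCHEME.put("http", "HttpTransport");
        BY_SCHEME.put("https", "HttpTransport");
        BY_SCHEME.put("sftp", "SftpTransport");
        BY_SCHEME.put("s3", "S3Transport");
        BY_SCHEME.put("file", "FileTransport");
    }

    // Picks the transport implied by the URL's scheme, failing for unknown ones.
    static String forUrl(String url) {
        String scheme = URI.create(url).getScheme();
        String transport = BY_SCHEME.get(scheme);
        if (transport == null) {
            throw new IllegalArgumentException("No transport for scheme: " + scheme);
        }
        return transport;
    }
}
```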

Another issue is that the name is often meaningless, at least to the build author. It would be nice to have a DSL where I didn’t need to give the thing a name if I don't care:

repositories {
    repo(Maven, MyCustomTransport) {
        url = '...'
    }
    repo(MavenCentral)
}

// They still have a name - it is assigned some default value
repositories.all { assert it.name != null }
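
Assigning a default name could follow a simple counting scheme, sketched here; the exact rule (base, base2, base3, ...) is an assumption for illustration, not necessarily what Gradle would do:

```java
// Sketch of a default naming scheme for unnamed repositories; the rule
// (base, base2, base3, ...) is an assumption for illustration.
import java.util.HashMap;
import java.util.Map;

class RepositoryNamer {
    private final Map<String, Integer> counts = new HashMap<>();

    // First repository of a kind gets the bare base name, later ones a suffix.
    String nextName(String base) {
        int n = counts.merge(base, 1, Integer::sum);
        return n == 1 ? base : base + n;
    }
}
```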

There are many ways we could structure a DSL:

repositories {
    repo(<types>) { // keyword could be 'add' or 'define' instead of 'repo'
        <settings>
    }
}

repositories {
    <name>(<types>) {
        <settings>
    }
}

repositories {
    repo(<types-or-identity-settings>) {
        <settings>
    }
    repo(Maven, url: 'http://..', name: 'somename') {
        // also implicitly an Http repo, inferred from the URL
    }
    repo(Maven, Http) {
        url = '...'
    }
}

repositories {
    repo {
        <types-and-identity-settings>
        <other-settings>
    }
    repo {
        type = MavenCentral
    }
    repo {
        type = Maven
        url = 's3:...' // Also implicitly an S3 repo
        credentials { accessKey = '...'; }
    }
    repo {
        type = Maven, Http
        name = 'someName'
        url = '...'
    }
}

There will be heaps more options, too, I imagine.



--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
CTO Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com




Re: Contribution request: add support for SFtp and SSH for publishing and resolving

Luke Daley-2
Starting with a DSL here seems risky to me. I’d start with a Java builder API that supports the kind of composition Adam is talking about and then work back to something more sugary if necessary, or provide the sugar as wrappers over the builder API for common cases. 
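
A builder along those lines might look roughly like this sketch; every name here is hypothetical, and the "repository" it produces is just a descriptive string for illustration. Layout, transport and URL are composed as independent pieces, with the transport inferred from the URL scheme when it isn't given explicitly:

```java
// Hypothetical builder sketch: layout, transport and URL are independent
// pieces; the transport falls back to the URL scheme when not set explicitly.
class RepositoryBuilder {
    private String layout = "maven";
    private String transport;
    private String url;

    RepositoryBuilder layout(String layout) {
        this.layout = layout;
        return this;
    }

    RepositoryBuilder transport(String transport) {
        this.transport = transport;
        return this;
    }

    RepositoryBuilder url(String url) {
        this.url = url;
        return this;
    }

    // Produces a descriptive string standing in for a real repository object.
    String build() {
        if (url == null) {
            throw new IllegalStateException("url is required");
        }
        String effective = transport != null ? transport : url.substring(0, url.indexOf(':'));
        return layout + " repository at " + url + " via " + effective;
    }
}
```

DSL sugar for common cases (maven {}, ivy {}, or a typed repo(...) method) could then be thin wrappers over such a builder.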





Re: Contribution request: add support for SFtp and SSH for publishing and resolving

desavera
In reply to this post by Adam Murdoch
Just wondering whether several testing strategies could be combined somehow using a Decorator implementation. Just a suggestion: that way we could statically define some interesting levels of testing, to be raised or lowered according to the testing level desired, much like a logging-levels mechanism.
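
The Decorator idea could look something like this sketch (all names hypothetical): a base file-system view is wrapped by layers that each add one testing concern, and more layers could be stacked to raise the "testing level".

```java
// Hypothetical decorator sketch: each layer wraps a file-system view and adds
// one testing concern; layers can be stacked to raise the "testing level".
interface FileSystemView {
    String read(String path);
}

class BaseView implements FileSystemView {
    public String read(String path) {
        return "data:" + path;
    }
}

class FailureInjectingView implements FileSystemView {
    private final FileSystemView delegate;
    private final String failOn;

    FailureInjectingView(FileSystemView delegate, String failOn) {
        this.delegate = delegate;
        this.failOn = failOn;
    }

    public String read(String path) {
        if (path.equals(failOn)) {
            throw new IllegalStateException("injected failure: " + path);
        }
        return delegate.read(path);
    }
}
```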