Extract data from Infor Data Lake and generate CSV files

Hi, I have a requirement to create a job (script) that extracts data from the Infor Data Lake and generates CSV files. Could you suggest the best approach?

  • You can use ION to extract data from the Data Lake and export it as a file to a folder.

    1. In the ION Data Lake flow, use Retrieve to extract the required tables.
    2. Apply any filter/schedule as required.
    3. Migrate the data to a DB/file.
  • I'd recommend checking out page 124 (as of December 30th, 2019) of the Infor ION Development Guide - Cloud Edition. It provides details on available Compass APIs and how you can submit a query, check the status, and retrieve the result set.

    You're able to request either a CSV or ndjson object that the API consumer can then further process. Alternatively, you could use the Compass JDBC driver to retrieve and export a result set.

    The Compass APIs aren't yet compatible with the ION API connector within ION, but they will be soon as we add enhancements to expand the supported content request types.
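
    To make the sequence concrete, here's a rough Python sketch of the submit/status/retrieve flow. Treat it as an illustration only: the endpoint paths, field names, and status values below are assumptions, so take the real ones from the Development Guide and your tenant's ION API documentation.

    ```python
    import time
    import requests

    # Placeholders -- substitute the real ION API gateway URL for your tenant
    # and a valid OAuth 2.0 bearer token obtained via your ION API credentials.
    BASE_URL = "https://<ion-api-gateway>/<tenant>/COMPASS"
    HEADERS = {"Authorization": "Bearer <access_token>"}

    def run_query_to_csv(sql: str, out_path: str) -> None:
        # 1. Submit the query; the API answers with an id for the async job.
        resp = requests.post(f"{BASE_URL}/v1/jobs", data=sql, headers=HEADERS)
        resp.raise_for_status()
        query_id = resp.json()["queryId"]  # field name is an assumption

        # 2. Poll the status endpoint until the job finishes.
        while True:
            status = requests.get(
                f"{BASE_URL}/v1/jobs/{query_id}/status", headers=HEADERS
            ).json()["status"]
            if status in ("FINISHED", "FAILED"):  # status values are assumptions
                break
            time.sleep(2)
        if status == "FAILED":
            raise RuntimeError(f"Compass query {query_id} failed")

        # 3. Retrieve the result set, asking for CSV rather than NDJSON.
        result = requests.get(
            f"{BASE_URL}/v1/jobs/{query_id}/result",
            headers={**HEADERS, "Accept": "text/csv"},
        )
        result.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(result.content)

    run_query_to_csv("SELECT * FROM SomeTable", "some_table.csv")
    ```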

  • In reply to Mike Kalinowski:

    Thank you for the input and the quick response, Mike. Is there a document describing the steps/workflow for connecting with the Compass JDBC driver?
  • In reply to LawsonS3:

    Absolutely, check out KB 2103864 for more details.

  • In reply to Mike Kalinowski:

    Thank you, Mike. I can see the documents for connecting from SQL clients, but my requirement is to extract data to a CSV file. Does the Compass JDBC driver support Python/Java/shell scripts?

    We may need to extract a high volume of data. Are there any limitations with the Compass API?
  • In reply to LawsonS3:

    Have you considered a back-end service to authorize Compass API access, then the correct Compass REST sequence to fetch your data? text/csv is a supported content type parameter. You could go with Java, Python, .NET, or JavaScript; a rough sketch of the authorization step follows.
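
    For example, in Python, the back-end service would first exchange its ION API credentials for a token and then request the result as text/csv. The field names below follow the .ionapi credentials file issued for a backend service, and the Compass endpoint path is a placeholder, so verify both against your own file and the Development Guide.

    ```python
    import requests

    # Values copied from the .ionapi credentials file for a backend service;
    # all of them are placeholders here.
    PU = "https://<ion-api-gateway>/<tenant>/"   # pu: base URL
    OT = "as/token.oauth2"                       # ot: token endpoint
    CI, CS = "<client-id>", "<client-secret>"    # ci / cs
    SAAK, SASK = "<access-key>", "<secret-key>"  # service account keys

    # 1. Exchange the service-account keys for an OAuth 2.0 access token.
    token = requests.post(
        PU + OT,
        data={
            "grant_type": "password",
            "client_id": CI,
            "client_secret": CS,
            "username": SAAK,
            "password": SASK,
        },
    ).json()["access_token"]

    # 2. Fetch a finished query's result set as CSV (the endpoint path is an
    #    assumption -- see the Development Guide for the real one).
    resp = requests.get(
        "https://<ion-api-gateway>/<tenant>/COMPASS/v1/jobs/<queryId>/result",
        headers={"Authorization": f"Bearer {token}", "Accept": "text/csv"},
    )
    resp.raise_for_status()
    with open("result.csv", "wb") as f:
        f.write(resp.content)
    ```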
  • In reply to LawsonS3:

    This particular feature will eventually come in AnySQL on top of the Data Lake as a managed ION feature. We're still actively developing it and hope to have it out in the second quarter of this year.

    Where are you trying to deliver the files - file server? API?

    For high volume, do you mean a high frequency of calls to the APIs, or extraordinarily large data sets? There's no functional limit on how large a data set can be, but I will note that we don't currently page the results. We've recently started getting this request from a few other users, and we're looking at when we can start to design and build :)
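
    In the meantime, since results aren't paged, one workaround is to chunk a large extract yourself on the client side, for example by filtering on a date column. A hypothetical sketch, reusing the run_query_to_csv helper from the earlier post (the LastModified column and the DATE literal syntax are assumptions about your table and SQL dialect):

    ```python
    from datetime import date

    # Split one large extract into monthly slices so that no single Compass
    # result set becomes enormous; each slice lands in its own CSV file.
    bounds = [date(2019, m, 1) for m in range(1, 13)] + [date(2020, 1, 1)]
    for start, end in zip(bounds, bounds[1:]):
        sql = (
            f"SELECT * FROM SomeTable "
            f"WHERE LastModified >= DATE '{start}' "
            f"AND LastModified < DATE '{end}'"
        )
        run_query_to_csv(sql, f"some_table_{start:%Y_%m}.csv")
    ```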

    I'll keep you updated