11/25/2023

Instal

This is the documentation for performing a custom install of a new Dataiku DSS instance on a Linux server. Other installation options are available (Dataiku Cloud Stacks, macOS, Windows, AWS sandbox, Azure sandbox, or Virtual Machine). Before starting, make sure that you meet the installation requirements.

A Dataiku DSS installation spans two folders:

- The installation directory, which contains the code of Dataiku DSS. This is the directory where the Dataiku DSS tarball is unzipped (denoted as "INSTALL_DIR").
- The data directory (which will later be named "DATA_DIR"). It contains:
  - The configuration of Dataiku DSS, including all user-generated configuration (datasets, recipes, insights, models, …)
  - A Python virtual environment dedicated to running the Python components of Dataiku DSS, including any user-installed packages
  - Dataiku DSS startup and shutdown scripts and command-line tools

Depending on your configuration, the data directory can also contain some managed datasets. Managed datasets can also be created outside of the data directory with some additional configuration.

It is highly recommended that you reserve at least 100 GB of space for the data directory. The data directory should be entirely contained within a single mount and be a regular folder: having foreign mounts within the data directory, or symlinking parts of the data directory to foreign mounts, is not supported.

Unpack the tar.gz in the location you have chosen for the installation directory.

# Start from the home directory of user account "dataiku",
# which will be used to run Dataiku DSS.
# We will install DSS using data directory: /home/dataiku/dss_data
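As a concrete sketch of the steps above, the commands might look like the following. The version string "X.Y.Z", the tarball name, and the port number are placeholders, not values from this post; check the Dataiku download page for the exact tarball name and the official reference documentation for the installer's current flags.

```shell
# Run as the "dataiku" user, starting from its home directory.
cd /home/dataiku

# Optional pre-check: confirm the mount that will hold the data
# directory has at least the recommended 100 GB free.
df -h /home/dataiku

# Unpack the tarball into the installation directory (INSTALL_DIR).
# "X.Y.Z" is a placeholder for the DSS version you downloaded.
tar xzf dataiku-dss-X.Y.Z.tar.gz

# Run the installer, pointing it at the data directory (DATA_DIR)
# and a port for the web interface (11000 is only an example).
dataiku-dss-X.Y.Z/installer.sh -d /home/dataiku/dss_data -p 11000
```

Keeping DATA_DIR outside INSTALL_DIR, as shown here, makes later upgrades simpler: a new tarball can be unpacked alongside the old one while the data directory stays in place.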