Replace tarantoolctl with tt (CE part) (#3706)
Resolves #3501 

Also: fixes build warnings

Co-authored-by: Kseniia Antonova <[email protected]>

Co-authored-by: Andrey Aksenov <[email protected]>
p7nov and andreyaksenov authored Sep 21, 2023
1 parent 2824070 commit 41aaa14
Showing 47 changed files with 600 additions and 378 deletions.
2 changes: 1 addition & 1 deletion doc/archive/shard.rst
@@ -81,7 +81,7 @@ To acquire it, do a separate installation:

.. code-block:: console
-   $ tarantoolctl rocks install shard
+   $ tt rocks install shard
* install with `yum` or `apt`, for example on Ubuntu say:

30 changes: 11 additions & 19 deletions doc/book/admin/daemon_supervision.rst
@@ -57,12 +57,11 @@ an instance:
$ systemctl status tarantool@my_app|grep PID
Main PID: 5885 (tarantool)
-   $ tarantoolctl enter my_app
-   /bin/tarantoolctl: Found my_app.lua in /etc/tarantool/instances.available
-   /bin/tarantoolctl: Connecting to /var/run/tarantool/my_app.control
-   /bin/tarantoolctl: connected to unix/:/var/run/tarantool/my_app.control
-   unix/:/var/run/tarantool/my_app.control> os.exit(-1)
-   /bin/tarantoolctl: unix/:/var/run/tarantool/my_app.control: Remote host closed connection
+   $ tt connect my_app
+      • Connecting to the instance...
+      • Connected to /var/run/tarantool/my_app.control
+   /var/run/tarantool/my_app.control> os.exit(-1)
+    ⨯ Connection was closed. Probably instance process isn't running anymore
Now let’s make sure that ``systemd`` has restarted the instance:

@@ -71,20 +70,11 @@
$ systemctl status tarantool@my_app|grep PID
Main PID: 5914 (tarantool)
-Finally, let’s check the boot logs:
+Additionally, you can find the information about the instance restart in the boot logs:

.. code-block:: console
$ journalctl -u tarantool@my_app -n 8
-   -- Logs begin at Fri 2016-01-08 12:21:53 MSK, end at Thu 2016-01-21 21:09:45 MSK. --
-   Jan 21 21:09:45 localhost.localdomain systemd[1]: tarantool@my_app.service: Unit entered failed state.
-   Jan 21 21:09:45 localhost.localdomain systemd[1]: tarantool@my_app.service: Failed with result 'exit-code'.
-   Jan 21 21:09:45 localhost.localdomain systemd[1]: tarantool@my_app.service: Service hold-off time over, scheduling restart.
-   Jan 21 21:09:45 localhost.localdomain systemd[1]: Stopped Tarantool Database Server.
-   Jan 21 21:09:45 localhost.localdomain systemd[1]: Starting Tarantool Database Server...
-   Jan 21 21:09:45 localhost.localdomain tarantoolctl[5910]: /usr/bin/tarantoolctl: Found my_app.lua in /etc/tarantool/instances.available
-   Jan 21 21:09:45 localhost.localdomain tarantoolctl[5910]: /usr/bin/tarantoolctl: Starting instance...
-   Jan 21 21:09:45 localhost.localdomain systemd[1]: Started Tarantool Database Server.
.. _admin-core_dumps:

@@ -118,9 +108,11 @@ instance:
.. code-block:: console
$ # !!! please never do this on a production system !!!
-   $ tarantoolctl enter my_app
-   unix/:/var/run/tarantool/my_app.control> require('ffi').cast('char *', 0)[0] = 48
-   /bin/tarantoolctl: unix/:/var/run/tarantool/my_app.control: Remote host closed connection
+   $ tt connect my_app
+      • Connecting to the instance...
+      • Connected to /var/run/tarantool/my_app.control
+   /var/run/tarantool/my_app.control> require('ffi').cast('char *', 0)[0] = 48
+    ⨯ Connection was closed. Probably instance process isn't running anymore
Alternatively, if you know the process ID of the instance (here we refer to it
as $PID), you can abort a Tarantool instance by running ``gdb`` debugger:
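A generic sketch of this route with standard ``gdb`` commands (the commit's own snippet is collapsed in this view, so this is an illustration, not the docs' exact example):

.. code-block:: console

   $ gdb -p $PID
   (gdb) signal SIGABRT

Raising ``SIGABRT`` in the attached process aborts the instance and, where core dumps are enabled, leaves a core file for later analysis.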
6 changes: 3 additions & 3 deletions doc/book/admin/disaster_recovery.rst
@@ -78,7 +78,7 @@ transferred to the replica before crash. If you were able to salvage the master

.. code-block:: console
-   $ tarantoolctl <new_master_uri> <xlog_file> play --from 23425 --replica 1
+   $ tt play <new_master_uri> <xlog_file> --from 23425 --replica 1
.. _admin-disaster_recovery-master_master:

@@ -118,9 +118,9 @@ Your actions:
made with older checkpoints until :doc:`/reference/reference_lua/box_backup/stop` is called.

2. Get the latest valid :ref:`.snap file <internals-snapshot>` and
-   use ``tarantoolctl cat`` command to calculate at which lsn the data loss occurred.
+   use ``tt cat`` command to calculate at which lsn the data loss occurred.

-3. Start a new instance (instance#1) and use ``tarantoolctl play`` command to
+3. Start a new instance (instance#1) and use ``tt play`` command to
play to it the contents of .snap/.xlog files up to the calculated lsn.

4. Bootstrap a new replica from the recovered master (instance#1).
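A hedged sketch of steps 2 and 3 above, with hypothetical paths, assuming ``tt cat`` and ``tt play`` keep the filtering options of their ``tarantoolctl`` counterparts (``--show-system``, ``--from``/``--to``); check ``tt help cat`` and ``tt help play`` for your version:

.. code-block:: console

   $ # step 2: inspect the latest valid snapshot to find the lsn where the data loss occurred
   $ tt cat /var/lib/tarantool/my_app/00000000000000000000.snap --show-system | less
   $ # step 3: replay the .snap/.xlog contents into the new instance up to that lsn
   $ tt play localhost:3301 /var/lib/tarantool/my_app/*.xlog --to 23425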
6 changes: 2 additions & 4 deletions doc/book/admin/index.rst
@@ -3,18 +3,16 @@

.. _admin:

-********************************************************************************
Administration
-********************************************************************************
+==============

Tarantool is designed to have multiple running instances on the same host.

Here we show how to administer Tarantool instances using any of the following
utilities:

* ``systemd`` native utilities, or
-* :ref:`tarantoolctl <tarantoolctl>`, an administrative utility shipped and installed as
-  part of Tarantool distribution.
+* :ref:`tt <tt-cli>`, a command-line utility for managing Tarantool-based applications.

.. NOTE::

97 changes: 19 additions & 78 deletions doc/book/admin/instance_config.rst
@@ -28,8 +28,7 @@ For each Tarantool instance, you need two files:
* An :ref:`instance file <admin-instance_file>` with
instance-specific initialization logic and parameters. Put this file, or a
symlink to it, into the **instance directory**
-  (see :ref:`instance_dir <admin-instance_dir>` parameter in ``tarantoolctl``
-  configuration file).
+  (see ``instances_enabled`` parameter in :ref:`tt configuration file <tt-config_file>`).

For example, ``/etc/tarantool/instances.enabled/my_app.lua`` (here we load
``my_app.lua`` module and make a call to ``start()`` function from that module):
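A minimal sketch of such an instance file (illustrative only; the commit's full example is collapsed in this view):

.. code-block:: lua

   -- /etc/tarantool/instances.enabled/my_app.lua

   -- Configure the database: the box.cfg{} call is what makes this
   -- file a manageable Tarantool instance rather than a plain script.
   box.cfg {
       listen = 3301,
   }

   -- Keep the business logic in a separate module and only call it here.
   local m = require('my_app')
   m.start()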
@@ -53,7 +52,7 @@ Instance file
-------------

After this short introduction, you may wonder what an instance file is, what it
-is for, and how ``tarantoolctl`` uses it. After all, Tarantool is an application
+is for, and how ``tt`` uses it. After all, Tarantool is an application
server, so why not start the application stored in ``/usr/share/tarantool``
directly?

@@ -76,7 +75,7 @@ An instance file is designed to not differ in any way from a Lua application.
It must, however, configure the database, i.e. contain a call to
:doc:`box.cfg{} </reference/reference_lua/box_cfg>` somewhere in it, because it’s the
only way to turn a Tarantool script into a background process, and
-``tarantoolctl`` is a tool to manage background processes. Other than that, an
+``tt`` is a tool to manage background processes. Other than that, an
instance file may contain arbitrary Lua code, and, in theory, even include the
entire application business logic in it. We, however, do not recommend this,
since it clutters the instance file and leads to unnecessary copy-paste when
@@ -155,82 +154,24 @@ You get the following output:
If an error happens during the execution of the preload script or module, Tarantool
reports the problem and exits.
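For example (a sketch assuming the ``TT_PRELOAD`` environment variable covered in the collapsed part of this section), a preload script that raises an error stops the server before the main script runs:

.. code-block:: console

   $ echo "error('preload failed')" > preload.lua
   $ TT_PRELOAD=preload.lua tarantool -e "print('never reached')"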

-.. _admin-tarantoolctl_config_file:
+.. _admin-tt_config_file:

-tarantoolctl configuration file
--------------------------------
+tt configuration file
+---------------------

-While instance files contain instance configuration, the ``tarantoolctl``
-configuration file contains the configuration that ``tarantoolctl`` uses to
-override instance configuration. In other words, it contains system-wide
-configuration defaults. If ``tarantoolctl`` fails to find this file with
-the method described in section
-:ref:`Starting/stopping an instance <admin-start_stop_instance>`, it uses
-default settings.
+While instance files contain instance configuration, the :ref:`tt <tt-cli>` configuration file
+contains the configuration that ``tt`` uses to set up the application environment.
+This includes the path to instance files, various working directories, and other
+parameters that connect the application to the system.

-Most of the parameters are similar to those used by
-:doc:`box.cfg{} </reference/reference_lua/box_cfg>`. Here are the default settings
-(possibly installed in ``/etc/default/tarantool`` or ``/etc/sysconfig/tarantool``
-as part of Tarantool distribution -- see OS-specific default paths in
-:ref:`Notes for operating systems <admin-os_notes>`):
+To create a default ``tt`` configuration, run ``tt init``. This creates a ``tt.yaml``
+configuration file. Its location depends on the :ref:`tt launch mode <tt-config_modes>`
+(system or local).
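For illustration, a trimmed ``tt.yaml`` in the layout these docs use elsewhere (paths are hypothetical, and key names may differ across ``tt`` releases):

.. code-block:: yaml

   tt:
     app:
       # where tt searches for instance files or symlinks to them
       instances_enabled: /etc/tarantool/instances.enabled
       # instance working directories, similar to their box.cfg counterparts
       wal_dir: /var/lib/tarantool
       memtx_dir: /var/lib/tarantool
       log_dir: /var/log/tarantool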

-.. code-block:: lua
-
-   default_cfg = {
-       pid_file  = "/var/run/tarantool",
-       wal_dir   = "/var/lib/tarantool",
-       memtx_dir = "/var/lib/tarantool",
-       vinyl_dir = "/var/lib/tarantool",
-       log       = "/var/log/tarantool",
-       username  = "tarantool",
-       language  = "Lua",
-   }
-
-   instance_dir = "/etc/tarantool/instances.enabled"
-
-where:

-* | ``pid_file``
-  | Directory for the pid file and control-socket file; ``tarantoolctl`` will
-    add “/instance_name” to the directory name.
-* | ``wal_dir``
-  | Directory for write-ahead .xlog files; ``tarantoolctl`` will add
-    "/instance_name" to the directory name.
-* | ``memtx_dir``
-  | Directory for snapshot .snap files; ``tarantoolctl`` will add
-    "/instance_name" to the directory name.
-* | ``vinyl_dir``
-  | Directory for vinyl files; ``tarantoolctl`` will add "/instance_name" to the
-    directory name.
-* | ``log``
-  | The place where the application log will go; ``tarantoolctl`` will add
-    "/instance_name.log" to the name.
-* | ``username``
-  | The user that runs the Tarantool instance. This is the operating-system user
-    name rather than the Tarantool-client user name. Tarantool will change its
-    effective user to this user after becoming a daemon.
-* | ``language``
-  | The :ref:`interactive console <interactive_console>` language. Can be either ``Lua`` or ``SQL``.
-
-.. _admin-instance_dir:
-
-* | ``instance_dir``
-  | The directory where all instance files for this host are stored. Put
-    instance files in this directory, or create symbolic links.
-    The default instance directory depends on Tarantool's ``WITH_SYSVINIT``
-    build option: when ON, it is ``/etc/tarantool/instances.enabled``,
-    otherwise (OFF or not set) it is ``/etc/tarantool/instances.available``.
-    The latter case is typical for Tarantool builds for Linux distros with
-    ``systemd``.
-
-    To check the build options, say ``tarantool --version``.
+Some ``tt`` configuration parameters are similar to those used by
+:doc:`box.cfg{} </reference/reference_lua/box_cfg>`, for example, ``memtx_dir``
+or ``wal_dir``. Other parameters define the ``tt`` environment, for example,
+paths to installation files used by ``tt`` or to connected :ref:`external modules <tt-external_modules>`.

-As a full-featured example, you can take
-`example.lua <https://github.com/tarantool/tarantool/blob/2.1/extra/dist/example.lua>`_
-script that ships with Tarantool and defines all configuration options.
+Find the detailed information about the ``tt`` configuration parameters and launch modes
+on the :ref:`tt configuration page <tt-config>`.
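As a quick way to see the environment configuration ``tt`` actually resolved, recent ``tt`` versions provide a dump command (assuming your version ships it):

.. code-block:: console

   $ tt cfg dump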
136 changes: 76 additions & 60 deletions doc/book/admin/logs.rst
@@ -1,76 +1,92 @@
.. _admin-logs:

-================================================================================
Logs
-================================================================================
+====

-Tarantool logs important events to a file, e.g. ``/var/log/tarantool/my_app.log``.
-To build the log file path, ``tarantoolctl`` takes the instance name, prepends
-the instance directory and appends “.log” extension.
+Each Tarantool instance logs important events to its own log file ``<instance-name>.log``.
+For instances started with :ref:`tt <tt-cli>`, the log location is defined by
+the ``log_dir`` parameter in the :ref:`tt configuration <tt-config>`.
+By default, it's ``/var/log/tarantool`` in the ``tt`` :ref:`system mode <tt-config_modes>`,
+and the ``var/log/`` subdirectory of the ``tt`` working directory in the :ref:`local mode <tt-config_modes>`.
+In the specified location, ``tt`` creates separate directories for each instance's logs.

-Let’s write something to the log file:
+To check how logging works, write something to the log using the :ref:`log <log-module>` module:

.. code-block:: console
-   $ tarantoolctl enter my_app
-   /bin/tarantoolctl: connected to unix/:/var/run/tarantool/my_app.control
-   unix/:/var/run/tarantool/my_app.control> require('log').info("Hello for the manual readers")
-   ---
-   ...
+   $ tt connect my_app
+      • Connecting to the instance...
+      • Connected to /var/run/tarantool/my_app.control
+   /var/run/tarantool/my_app.control> require('log').info("Hello for the manual readers")
Then check the logs:

.. code-block:: console
$ tail /var/log/tarantool/my_app.log
-   2017-04-04 15:54:04.977 [29255] main/101/tarantoolctl C> version 1.7.3-382-g68ef3f6a9
-   2017-04-04 15:54:04.977 [29255] main/101/tarantoolctl C> log level 5
-   2017-04-04 15:54:04.978 [29255] main/101/tarantoolctl I> mapping 134217728 bytes for tuple arena...
-   2017-04-04 15:54:04.985 [29255] iproto/101/main I> binary: bound to [::1]:3301
-   2017-04-04 15:54:04.986 [29255] main/101/tarantoolctl I> recovery start
-   2017-04-04 15:54:04.986 [29255] main/101/tarantoolctl I> recovering from `/var/lib/tarantool/my_app/00000000000000000000.snap'
-   2017-04-04 15:54:04.988 [29255] main/101/tarantoolctl I> ready to accept requests
-   2017-04-04 15:54:04.988 [29255] main/101/tarantoolctl I> set 'checkpoint_interval' configuration option to 3600
-   2017-04-04 15:54:04.988 [29255] main/101/my_app I> Run console at unix/:/var/run/tarantool/my_app.control
-   2017-04-04 15:54:04.989 [29255] main/106/console/unix/:/var/ I> started
-   2017-04-04 15:54:04.989 [29255] main C> entering the event loop
-   2017-04-04 15:54:47.147 [29255] main/107/console/unix/: I> Hello for the manual readers
+   2023-09-12 18:13:00.396 [67173] main/111/guard of feedback_daemon/box.feedback_daemon V> metrics_collector restarted
+   2023-09-12 18:13:00.396 [67173] main/103/-/box.feedback_daemon V> feedback_daemon started
+   2023-09-12 18:13:00.396 [67173] main/103/- D> memtx_tuple_new_raw_impl(14) = 0x1090077b4
+   2023-09-12 18:13:00.396 [67173] main/103/- D> memtx_tuple_new_raw_impl(26) = 0x1090077ec
+   2023-09-12 18:13:00.396 [67173] main/103/- D> memtx_tuple_new_raw_impl(39) = 0x109007824
+   2023-09-12 18:13:00.396 [67173] main/103/- D> memtx_tuple_new_raw_impl(24) = 0x10900785c
+   2023-09-12 18:13:00.396 [67173] main/103/- D> memtx_tuple_new_raw_impl(39) = 0x109007894
+   2023-09-12 18:13:00.396 [67173] main/106/checkpoint_daemon I> scheduled next checkpoint for Tue Sep 12 19:44:34 2023
+   2023-09-12 18:13:00.396 [67173] main I> entering the event loop
+   2023-09-12 18:13:11.656 [67173] main/114/console/unix/:/tarantool I> Hello for the manual readers
.. _admin-logs-rotation:

Log rotation
------------

When :ref:`logging to a file <cfg_logging-log>`, the system administrator must ensure logs are
-rotated timely and do not take up all the available disk space. With
-``tarantoolctl``, log rotation is pre-configured to use ``logrotate`` program,
-which you must have installed.
-
-File ``/etc/logrotate.d/tarantool`` is part of the standard Tarantool
-distribution, and you can modify it to change the default behavior. This is what
-this file is usually like:
-
-.. code-block:: text
-
-   /var/log/tarantool/*.log {
-       daily
-       size 512k
-       missingok
-       rotate 10
-       compress
-       delaycompress
-       create 0640 tarantool adm
-       postrotate
-           /usr/bin/tarantoolctl logrotate `basename ${1%%.*}`
-       endscript
-   }
-
-If you use a different log rotation program, you can invoke
-``tarantoolctl logrotate`` command to request instances to reopen their log
-files after they were moved by the program of your choice.
-
-Tarantool can write its logs to a log file, ``syslog`` or a program specified
-in the configuration file (see :ref:`log <cfg_logging-log>` parameter).
-
-By default, logs are written to a file as defined in ``tarantoolctl``
-defaults. ``tarantoolctl`` automatically detects if an instance is using
-``syslog`` or an external program for logging, and does not override the log
-destination in this case. In such configurations, log rotation is usually
-handled by the external program used for logging. So,
-``tarantoolctl logrotate`` command works only if logging-into-file is enabled
-in the instance file.
+rotated timely and do not take up all the available disk space.
+To prevent log files from growing infinitely, ``tt`` automatically rotates instance
+logs. The following ``tt`` configuration parameters define the log rotation:
+``log_maxsize`` (in megabytes) and ``log_maxage`` (in days). When any of these
+limits is reached, the log is rotated.
+Additionally, there is the ``log_maxbackups`` parameter (the number of stored log
+files for an instance), which enables automatic removal of old log files.

+.. code-block:: yaml
+
+   # tt.yaml
+   tt:
+     app:
+       log_maxsize: 100
+       log_maxage: 3
+       log_maxbackups: 50
+       # ...
+
+There is also the :ref:`tt logrotate <tt-logrotate>` command that performs log
+rotation on demand.
+
+.. code-block:: bash
+
+   tt logrotate my_app
+
+To learn about log rotation in the deprecated ``tarantoolctl`` utility,
+check its :ref:`documentation <tarantoolctl-log-rotation>`.


+.. _admin-logs-formats:
+
+Log formats
+-----------
+
+Tarantool can write its logs to a log file, to ``syslog``, or to a specified program
+through a pipe.
+
+File is the default log format for ``tt``. To send logs to a pipe or ``syslog``,
+specify the :ref:`box.cfg.log <cfg_logging-log>` parameter, for example:
+
+.. code-block:: lua
+
+   box.cfg{log = '| cronolog tarantool.log'}
+   -- or
+   box.cfg{log = 'syslog:identity=tarantool,facility=user'}
+
+In such configurations, log rotation is usually handled by the external program
+used for logging.