Mars - a tensor-based unified framework for large-scale data computation, open-sourced by Alibaba

Contributed by a community user · 2022-11-05 23:07:00


Mars is a tensor-based unified framework for large-scale data computation which scales Numpy, Pandas and Scikit-learn.

Documentation (English), Chinese documentation (中文文档)

Installation

Mars can be installed with pip:

pip install pymars

To install the dependencies required by the distributed version, use the command below.

pip install 'pymars[distributed]'

For now, the distributed version is only available on Linux and macOS.

Developer Install

To contribute code to Mars, follow the instructions below to install Mars for development:

git clone https://github.com/mars-project/mars.git
cd mars
pip install -e ".[dev]"

More details about installing Mars can be found in the getting started section of the Mars documentation.

Mars tensor

Mars tensor provides an interface familiar to NumPy users.

NumPy:

import numpy as np

N = 200_000_000
a = np.random.uniform(-1, 1, size=(N, 2))
print((np.linalg.norm(a, axis=1) < 1)
      .sum() * 4 / N)

3.14151712
CPU times: user 12.5 s, sys: 7.16 s, total: 19.7 s
Wall time: 21.8 s

Mars tensor:

import mars.tensor as mt

N = 200_000_000
a = mt.random.uniform(-1, 1, size=(N, 2))
print(((mt.linalg.norm(a, axis=1) < 1)
        .sum() * 4 / N).execute())

3.14161908
CPU times: user 17.5 s, sys: 3.56 s, total: 21.1 s
Wall time: 5.59 s

Mars can leverage multiple cores, even on a laptop, and can be even faster in a distributed setting.
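The speedup comes from splitting the tensor into chunks and computing the chunks in parallel. A minimal sketch of that idea in plain NumPy, using a thread pool in place of Mars' scheduler (the `chunk_count` helper and the chunk size here are illustrative, not Mars internals):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chunk_count(n, seed):
    # count the points of one chunk that fall inside the unit circle
    rs = np.random.RandomState(seed)
    a = rs.uniform(-1, 1, size=(n, 2))
    return int((np.linalg.norm(a, axis=1) < 1).sum())

N, n = 4_000_000, 500_000  # total points, points per chunk
with ThreadPoolExecutor() as pool:
    # each chunk is independent, so the pool can run them concurrently
    counts = list(pool.map(chunk_count, [n] * (N // n), range(N // n)))
pi = sum(counts) * 4 / N   # combine the per-chunk results
print(pi)                  # close to 3.14159
```

Mars does far more (a chunk graph, spill-to-disk, distributed workers), but the chunk-and-combine structure is the core of why the wall time drops.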

Mars DataFrame

Mars DataFrame provides an interface familiar to pandas users.

pandas:

import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.random.rand(100000000, 4),
    columns=list('abcd'))
print(df.sum())

CPU times: user 10.9 s, sys: 2.69 s, total: 13.6 s
Wall time: 11 s

Mars DataFrame:

import mars.tensor as mt
import mars.dataframe as md

df = md.DataFrame(
    mt.random.rand(100000000, 4),
    columns=list('abcd'))
print(df.sum().execute())

CPU times: user 16.5 s, sys: 3.52 s, total: 20 s
Wall time: 3.6 s
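Conceptually, Mars DataFrame partitions the frame into row chunks, aggregates each chunk independently, and then combines the partial results. A rough sketch of that split-apply-combine step in plain pandas (the chunk size is illustrative, not a Mars default):

```python
import numpy as np
import pandas as pd

# a small frame stands in for the huge one above
df = pd.DataFrame(np.arange(8_000.0).reshape(2_000, 4),
                  columns=list('abcd'))

chunk_rows = 500
# sum each row chunk independently (these could run in parallel) ...
partials = [df.iloc[i:i + chunk_rows].sum()
            for i in range(0, len(df), chunk_rows)]
# ... then combine the per-chunk sums into the final per-column answer
chunked_sum = pd.concat(partials, axis=1).sum(axis=1)

assert (chunked_sum == df.sum()).all()
```

Because sum is associative, the chunked result is exactly the result of summing the whole frame at once; the same trick works for count, min, max, and (with a little more bookkeeping) mean.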

Mars learn

Mars learn provides an interface familiar to scikit-learn users.

Scikit-learn:

from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

X, y = make_blobs(
    n_samples=100000000, n_features=3,
    centers=[[3, 3, 3], [0, 0, 0],
             [1, 1, 1], [2, 2, 2]],
    cluster_std=[0.2, 0.1, 0.2, 0.2],
    random_state=9)
pca = PCA(n_components=3)
pca.fit(X)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_)

Mars learn:

from mars.learn.datasets import make_blobs
from mars.learn.decomposition import PCA

X, y = make_blobs(
    n_samples=100000000, n_features=3,
    centers=[[3, 3, 3], [0, 0, 0],
             [1, 1, 1], [2, 2, 2]],
    cluster_std=[0.2, 0.1, 0.2, 0.2],
    random_state=9)
pca = PCA(n_components=3)
pca.fit(X)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_)

Mars remote

Mars remote allows users to execute functions in parallel.

Vanilla function calls:

import numpy as np

def calc_chunk(n, i):
    rs = np.random.RandomState(i)
    a = rs.uniform(-1, 1, size=(n, 2))
    d = np.linalg.norm(a, axis=1)
    return (d < 1).sum()

def calc_pi(fs, N):
    return sum(fs) * 4 / N

N = 200_000_000
n = 10_000_000

fs = [calc_chunk(n, i)
      for i in range(N // n)]
pi = calc_pi(fs, N)
print(pi)

3.1416312
CPU times: user 32.2 s, sys: 4.86 s, total: 37.1 s
Wall time: 12.4 s

Mars remote:

import numpy as np
import mars.remote as mr

def calc_chunk(n, i):
    rs = np.random.RandomState(i)
    a = rs.uniform(-1, 1, size=(n, 2))
    d = np.linalg.norm(a, axis=1)
    return (d < 1).sum()

def calc_pi(fs, N):
    return sum(fs) * 4 / N

N = 200_000_000
n = 10_000_000

fs = [mr.spawn(calc_chunk, args=(n, i))
      for i in range(N // n)]
pi = mr.spawn(calc_pi, args=(fs, N))
print(pi.execute().fetch())

3.1416312
CPU times: user 16.9 s, sys: 5.46 s, total: 22.3 s
Wall time: 4.83 s
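Note that mr.spawn does not run the function immediately: it records the call and its arguments, and execute resolves the resulting dependency graph (here fs, a list of spawned results, is itself an argument to another spawned call). A toy sketch of that deferred-execution idea in plain Python — the Lazy class below is hypothetical and far simpler than Mars' actual task graph:

```python
import numpy as np

class Lazy:
    """Toy stand-in for mr.spawn: record a call, run it on demand."""
    def __init__(self, func, args=()):
        self.func, self.args = func, args

    def execute(self):
        # resolve dependencies first: any Lazy argument (or list of
        # Lazy results) is executed before the function itself runs
        resolved = [
            [a.execute() for a in arg] if isinstance(arg, list)
            else (arg.execute() if isinstance(arg, Lazy) else arg)
            for arg in self.args
        ]
        return self.func(*resolved)

def calc_chunk(n, i):
    rs = np.random.RandomState(i)
    a = rs.uniform(-1, 1, size=(n, 2))
    return int((np.linalg.norm(a, axis=1) < 1).sum())

def calc_pi(fs, N):
    return sum(fs) * 4 / N

N, n = 2_000_000, 200_000
fs = [Lazy(calc_chunk, args=(n, i)) for i in range(N // n)]
pi = Lazy(calc_pi, args=(fs, N))   # nothing has run yet
print(pi.execute())                # runs the whole task graph
```

Because the graph is only described up front, a real scheduler (like Mars') is free to run the independent calc_chunk nodes in parallel across cores or machines before combining them in calc_pi.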

Eager Mode

Mars supports an eager mode, which makes it friendlier for development and easier to debug.

Users can enable eager mode by setting options at the beginning of a program or console session.

>>> from mars.config import options
>>> options.eager_mode = True

Or use a context manager.

>>> from mars.config import option_context
>>> with option_context() as options:
>>>     options.eager_mode = True
>>>     # the eager mode is on only for the with statement
>>>     ...

If eager mode is on, tensors, DataFrames, etc. are executed immediately in the default session as soon as they are created.

>>> import mars.tensor as mt
>>> import mars.dataframe as md
>>> from mars.config import options
>>> options.eager_mode = True
>>> t = mt.arange(6).reshape((2, 3))
>>> t
array([[0, 1, 2],
       [3, 4, 5]])
>>> df = md.DataFrame(t)
>>> df.sum()
0    3
1    5
2    7
dtype: int64

Easy to scale in and scale out

Mars can scale in to a single machine and scale out to a cluster with thousands of machines. The local and distributed versions share the same code, so migrating from a single machine to a cluster as data grows is straightforward.

On a single machine, Mars supports thread-based scheduling as well as local cluster scheduling, which bundles all the distributed components. Mars also scales out to a cluster easily, by starting the different components of the Mars distributed runtime on different machines in the cluster.

Threaded

The execute method runs on the thread-based scheduler on a single machine by default.

>>> import mars.tensor as mt
>>> a = mt.ones((10, 10))
>>> a.execute()

Users can create a session explicitly.

>>> from mars.session import new_session
>>> session = new_session()
>>> (a * 2).execute(session=session)
>>> # session will be released when out of with statement
>>> with new_session() as session2:
>>>     (a / 3).execute(session=session2)

Local cluster

Users can start a local cluster bundled with the distributed runtime on a single machine. Local cluster mode requires the distributed version of Mars.

>>> from mars.deploy.local import new_cluster
>>> # cluster will create a session and set it as default
>>> cluster = new_cluster()
>>> # run on the local cluster
>>> (a + 1).execute()
>>> # create a session explicitly by specifying the cluster's endpoint
>>> session = new_session(cluster.endpoint)
>>> (a * 3).execute(session=session)

Distributed

After installing the distributed version on every node in the cluster, one node can be selected as the scheduler and another as the web service, leaving the other nodes as workers. The scheduler can be started with the following command:

mars-scheduler -a <scheduler_ip> -p <scheduler_port>

The web service can be started with the following command:

mars-web -a <web_ip> -s <scheduler_endpoint> --ui-port <ui_port>

Workers can be started with the following command:

mars-worker -a <worker_ip> -p <worker_port> -s <scheduler_endpoint>

After all Mars processes are started, users can run:

>>> sess = new_session('http://<web_ip>:<ui_port>')
>>> a = mt.ones((2000, 2000), chunk_size=200)
>>> b = mt.inner(a, a)
>>> b.execute(session=sess)

Getting involved

Read the contribution guide.
Join the mailing list: send an email to mars-dev@googlegroups.com.
Report bugs by submitting a GitHub issue.
Submit contributions using pull requests.

Thank you in advance for your contributions!
