#
# spec file for package python-tldextract
#
# Copyright (c) 2016 SUSE LINUX GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.

# Please submit bugfixes or comments via http://bugs.opensuse.org/
#

# See also http://en.opensuse.org/openSUSE:Specfile_guidelines

Name: python-tldextract
Version: 2.0.1
Release: 0
Summary: Accurately separate the TLD from the registered domain and subdomains of a URL
License: BSD
Group: Productivity/Networking/DNS/Utilities
Url: https://github.com/john-kurkowski/tldextract
Source0: https://pypi.python.org/packages/f4/fd/f9995517d2fce9b4800680916c8ace079cf6ced8fb7ff84a301105d87668/tldextract-%{version}.tar.gz
BuildArch: noarch
BuildRoot: %{_tmppath}/%{name}-%{version}-build

BuildRequires: fdupes
BuildRequires: pkg-config
BuildRequires: pkgconfig(python) >= 2.6.6
BuildRequires: python-setuptools

Requires: python-idna >= 2.1.0
Requires: python-requests >= 2.1.0
Requires: python-requests-file >= 1.4

%description
tldextract accurately separates the gTLD or ccTLD (generic or country code
top-level domain) from the registered domain and subdomains of a URL. For
example, say you want just the 'google' part of 'http://www.google.com'.

Everybody gets this wrong. Splitting on the '.' and taking the last 2
elements goes a long way only if you're thinking of simple e.g. .com
domains. Think parsing http://forums.bbc.co.uk for example: the naive
splitting method above will give you 'co' as the domain and 'uk' as the
TLD, instead of 'bbc' and 'co.uk' respectively.

tldextract on the other hand knows what all gTLDs and ccTLDs look like
by looking up the currently living ones according to the Public Suffix
List. So, given a URL, it knows its subdomain from its domain, and its
domain from its country code.
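
For illustration, a minimal interactive sketch of the intended behaviour
(assuming the upstream tldextract Python API):

    >>> import tldextract
    >>> tldextract.extract('http://www.google.com')
    ExtractResult(subdomain='www', domain='google', suffix='com')
    >>> tldextract.extract('http://forums.bbc.co.uk/')
    ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk')
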
%prep
%setup -q -n tldextract-%{version}

# rpmlint: remove stray .gitignore files
find -type f -name ".gitignore" -exec rm {} \;

%build
python setup.py build

# rpmlint: remove stray .buildinfo files
find -type f -name ".buildinfo" -exec rm {} \;

%install
python setup.py install -O1 --skip-build --prefix=%{_prefix} --root=%{buildroot}
%fdupes %{buildroot}

%files
%defattr(-,root,root)
%{python_sitelib}/tldextract
%{python_sitelib}/tldextract-*
%{_bindir}/tldextract

%changelog