Fix TabError for Python 3
Python 3 treats TabErrors as syntax errors.

[flake8](http://flake8.pycqa.org) testing of https://github.com/geekcomputers/Python on Python 3.7.1

$ __flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics__
```
./Google_News.py:14:46: E999 TabError: inconsistent use of tabs and spaces in indentation
	Client=urlopen(xml_news_url, context=context)
                                             ^
1     E999 SyntaxError: invalid syntax
1
```
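For context, here is a minimal sketch (not taken from the repository) of the failure mode: Python 3's tokenizer rejects indentation that mixes tabs and spaces ambiguously, and the resulting TabError is a subclass of SyntaxError, so the file cannot be compiled or imported at all.

```python
# Hypothetical example: a function body indented with spaces on one line and a tab on the next.
source = (
    "def news(url):\n"
    "    context = None\n"  # four spaces
    "\tclient = None\n"     # one tab -- ambiguous relative to the line above
)

try:
    compile(source, "Google_News.py", "exec")
except TabError as err:  # TabError -> IndentationError -> SyntaxError
    print(f"{type(err).__name__}: {err}")
```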

__E901,E999,F821,F822,F823__ are the "_showstopper_" [flake8](http://flake8.pycqa.org) issues that can halt the runtime with a SyntaxError, NameError, etc. Most other flake8 issues are merely "style violations" -- useful for readability but they do not affect runtime safety. (A short illustrative sketch follows the list below.)
* F821: undefined name `name`
* F822: undefined name `name` in `__all__`
* F823: local variable `name` referenced before assignment
* E901: SyntaxError or IndentationError
* E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree
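As a rough illustration of what the F-checks flag (a hypothetical module, not part of this commit; every name in it is made up):

```python
__all__ = ["greet", "farewell"]  # F822: "farewell" is listed here but never defined


def greet():
    print(mesage)  # F821: undefined name (a typo that would raise NameError at runtime)


def outer():
    total = 0

    def inner():
        print(total)  # F823: "total" exists in the enclosing scope, but the assignment
        total = 1     # below makes it local to inner(), so it is unbound at the print

    inner()
```

E901 and E999 have no snippet here because they cover files that fail to parse at all -- exactly what the TabError in this commit triggers.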
cclauss committed Jan 5, 2019
1 parent 30af8d0 commit 1321cde
Showing 1 changed file with 22 additions and 22 deletions: Google_News.py
```diff
@@ -5,30 +5,30 @@
 from urllib.request import urlopen

 def news(xml_news_url):
-	'''Print select details from a html response containing xml
-	@param xml_news_url: url to parse
-	'''
-	context = ssl._create_unverified_context()
-	Client=urlopen(xml_news_url, context=context)
-	xml_page=Client.read()
-	Client.close()
-	soup_page=soup(xml_page,"xml")
-	news_list=soup_page.findAll("item")
-	for news in news_list:
-		print(f'news title: {news.title.text}')
-		print(f'news link: {news.link.text}')
-		print(f'news pubDate: {news.pubDate.text}')
-		print("+-"*20,"\n\n")
-
-#you can add google news 'xml' URL here for any country/category
+    '''Print select details from a html response containing xml
+    @param xml_news_url: url to parse
+    '''
+
+    context = ssl._create_unverified_context()
+    Client=urlopen(xml_news_url, context=context)
+    xml_page=Client.read()
+    Client.close()
+
+    soup_page=soup(xml_page,"xml")
+
+    news_list=soup_page.findAll("item")
+
+    for news in news_list:
+        print(f'news title: {news.title.text}')
+        print(f'news link: {news.link.text}')
+        print(f'news pubDate: {news.pubDate.text}')
+        print("+-"*20,"\n\n")
+
+#you can add google news 'xml' URL here for any country/category
 news_url="https://news.google.com/news/rss/?ned=us&gl=US&hl=en"
 sports_url="https://news.google.com/news/rss/headlines/section/topic/SPORTS.en_in/Sports?ned=in&hl=en-IN&gl=IN"

 #now call news function with any of these url or BOTH
 news(news_url)
 news(sports_url)
```
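A quick local sanity check for this kind of fix (a sketch, not part of the commit; it assumes Google_News.py is in the current directory):

```python
import py_compile
import tabnanny

# tabnanny prints a diagnostic if any line still mixes tabs and spaces ambiguously.
tabnanny.check("Google_News.py")

# py_compile raises PyCompileError if the file still fails to compile (E999 / TabError).
py_compile.compile("Google_News.py", doraise=True)
print("Google_News.py compiles cleanly")
```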
