I have two areas of interest that I am currently focusing on: blockchain and artificial intelligence. To be honest, the emergence of ChatGPT has been much faster than expected. Perhaps one day in the future, we will look back and realize that this year has truly been a very special one.
Some Personal Experiences#
Lately, I have been using the new ChatGPT model, and I increasingly feel that it is a groundbreaking product. It frees individuals from heavy reliance on specialized knowledge, unleashing even more creativity.
As an example, I have been learning Python web scraping recently, with the intention of scraping all the articles on my blog and outputting them into a table. In the past, I would have had to think about how to implement it and run multiple tests. Now, all I need to do is express my requirements in natural language to GPT, and it returns logically sound code. All I have to do is adjust a few parameters to better suit my needs.
Here is the Python web scraping code GPT-4 generated for me. I only made minor adjustments to the `find()` calls and changed the class attributes. The efficiency is astonishing.
```python
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Define the target URL
base_url = 'http://chiloh.cn'
page_url = '/page/'

# Send a request and get the total number of pages
response = requests.get(base_url)
soup = BeautifulSoup(response.text, 'html.parser')
last_page = int(soup.find('ol', class_='page-navigator').find_all('a')[-2].text.strip())

# Create an empty list to store the titles, links, and publication dates of each article
data = []

# Iterate through all the pages
for i in range(1, last_page + 1):
    url = base_url + page_url + str(i)
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')

    # Find all the articles on the current page
    articles = soup.find_all('article', class_='post')

    # Extract the title, link, and publication date of each article and add them to the data list
    for article in articles:
        title = article.find('h2', class_='post-title').text.strip()
        link = article.find('a', href=True)['href']
        date = article.find('date', class_='post-meta').text.strip()
        data.append([title, link, date])

# Convert the data list into a pandas DataFrame
df = pd.DataFrame(data, columns=['Title', 'Link', 'Publication Date'])

# Save the DataFrame as a table file named "typecho.csv"
df.to_csv('typecho.csv', index=False)
```
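Once the scrape finishes, the resulting CSV can be loaded back with pandas for further analysis, such as sorting the articles chronologically. A minimal sketch (the sample rows below are made up for illustration; a real run would read `typecho.csv` directly):

```python
import pandas as pd
from io import StringIO

# Hypothetical sample rows standing in for the scraped "typecho.csv"
csv_text = """Title,Link,Publication Date
Hello World,http://chiloh.cn/hello-world.html,2023-01-05
On AI,http://chiloh.cn/on-ai.html,2023-03-20
"""

df = pd.read_csv(StringIO(csv_text))

# Parse the date column so the articles can be sorted chronologically
df['Publication Date'] = pd.to_datetime(df['Publication Date'])
df = df.sort_values('Publication Date', ascending=False)

# The first row is now the most recent article
print(df.iloc[0]['Title'])
```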
Some Personal Thoughts#
I wrote the following reflections in my company's daily report. I originally included some product-related ideas, but removed them because they were sensitive. The gist remains, and it reflects my experiences and thoughts over the past six months.
AI has arrived faster than expected. Looking back, many of the products that came before now seem like ducks before the advent of plumbing. AI is like a child of human wisdom, constantly approaching what we call the essence, yet never the essence itself. It can be Everything.
My understanding of it, combined with my own thoughts and experiences from using various products, aligns with what I want to explore in products:
Reduced Barriers in Information Transmission
AI's human-like prompt interaction greatly lowers the barrier to both input and output, freeing individuals from the constraints of specialized knowledge and further unleashing creativity. More super individuals will undoubtedly emerge, and the space for expression is visibly expanding.
Accelerated Information Flow
Multi-modal models make the form of information less important, just as we once imagined: information can be solid, liquid, or even gaseous. Today, information can be text, images, audio, or video.
This means that a new entity called "Flow" may appear, which can circulate between different applications and roles. In one application, it may be text, while in another application, it may be video. However, the essence of what they convey is the same sequence of information.
Cross-modal information, when flowing, will naturally adapt to any shape and become the desired form.
Loss and Restoration of Information Quality
The transformation of information between forms inevitably incurs some loss across different times and spaces. When video is converted into text, the image is lost, just as we lose something by seeing each other only through a camera during a meeting. That is the lost information. Thinking in reverse, then, what we are really doing is trying to restore the essence of the information as fully as possible.
Take online meetings: what we lose is the experience of everyone seeing each other in person, so video conferencing software adds camera enhancements, corrected eye contact, and noise reduction. Essentially, these features are attempts to restore the complete information within that time and space.
Different times and spaces lose different information, which means there are various information supplies, and we need to freely customize how they are delivered into the next time and space. Different scenarios and roles have different information needs. Restoring the complete information is more like selecting the most important parts from these supplies and doing our best to reconstruct them within that "scene."
If we view the circulation of information across times and spaces as a form of "death," then the generative ability of AGI is about "creation and rebirth": some information dies in the previous time and space, but new information is born in the next.